Test Report: KVM_Linux_crio 21409

85b41f691a12e65aa248bccfcbb3dd5af1b8ee95:2025-12-08:42683

Failed tests (3/437):

Order | Failed test | Duration (s)
46 | TestAddons/parallel/Ingress | 156.51
201 | TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim | 369.63
345 | TestPreload | 150.68

TestAddons/parallel/Ingress (156.51s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-301052 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-301052 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-301052 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [ef7f12e8-972f-418c-8608-d62b63b98950] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [ef7f12e8-972f-418c-8608-d62b63b98950] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004053876s
I1208 03:42:56.034351  129900 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-301052 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-301052 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.588071322s)

** stderr ** 
	ssh: Process exited with status 28
** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-301052 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-301052 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.103
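
The `ssh` failure above is consistent with curl timing out inside the VM: the stderr line "Process exited with status 28" reports the remote command's status, and 28 is curl's documented exit code for an operation timeout, while minikube itself then exits 1. A minimal sketch of the same probe outside the test harness, assuming the binary path and profile name shown in the log:

    // Hedged sketch, not the test's actual code: rerun the failing ingress
    // probe (addons_test.go:264) and report how it exited.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-linux-amd64", "-p", "addons-301052",
    		"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
    	out, err := cmd.CombinedOutput()
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) {
    		// minikube exits 1 here; the remote curl status (28 = operation
    		// timed out) is reported in the captured stderr text instead.
    		fmt.Printf("probe failed, exit %d: %s\n", exitErr.ExitCode(), out)
    		return
    	}
    	if err != nil {
    		fmt.Println("could not run minikube:", err)
    		return
    	}
    	fmt.Printf("probe succeeded: %s\n", out)
    }
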
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-301052 -n addons-301052
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-301052 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-301052 logs -n 25: (1.054055813s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	COMMAND | ARGS | PROFILE | USER | VERSION | START TIME | END TIME
	delete | -p download-only-232951 | download-only-232951 | jenkins | v1.37.0 | 08 Dec 25 03:39 UTC | 08 Dec 25 03:39 UTC
	start | --download-only -p binary-mirror-485333 --alsologtostderr --binary-mirror http://127.0.0.1:38203 --driver=kvm2  --container-runtime=crio | binary-mirror-485333 | jenkins | v1.37.0 | 08 Dec 25 03:39 UTC |
	delete | -p binary-mirror-485333 | binary-mirror-485333 | jenkins | v1.37.0 | 08 Dec 25 03:39 UTC | 08 Dec 25 03:39 UTC
	addons | disable dashboard -p addons-301052 | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:39 UTC |
	addons | enable dashboard -p addons-301052 | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:39 UTC |
	start | -p addons-301052 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:39 UTC | 08 Dec 25 03:42 UTC
	addons | addons-301052 addons disable volcano --alsologtostderr -v=1 | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:42 UTC | 08 Dec 25 03:42 UTC
	addons | addons-301052 addons disable gcp-auth --alsologtostderr -v=1 | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:42 UTC | 08 Dec 25 03:42 UTC
	addons | enable headlamp -p addons-301052 --alsologtostderr -v=1 | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:42 UTC | 08 Dec 25 03:42 UTC
	addons | addons-301052 addons disable nvidia-device-plugin --alsologtostderr -v=1 | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:42 UTC | 08 Dec 25 03:42 UTC
	addons | addons-301052 addons disable metrics-server --alsologtostderr -v=1 | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:42 UTC | 08 Dec 25 03:42 UTC
	ssh | addons-301052 ssh cat /opt/local-path-provisioner/pvc-7dfb495a-6399-4db8-a94c-9302cbd53b7e_default_test-pvc/file1 | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:42 UTC | 08 Dec 25 03:42 UTC
	addons | addons-301052 addons disable storage-provisioner-rancher --alsologtostderr -v=1 | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:42 UTC | 08 Dec 25 03:43 UTC
	addons | addons-301052 addons disable headlamp --alsologtostderr -v=1 | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:42 UTC | 08 Dec 25 03:42 UTC
	ip | addons-301052 ip | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:42 UTC | 08 Dec 25 03:42 UTC
	addons | addons-301052 addons disable registry --alsologtostderr -v=1 | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:42 UTC | 08 Dec 25 03:42 UTC
	addons | addons-301052 addons disable yakd --alsologtostderr -v=1 | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:42 UTC | 08 Dec 25 03:42 UTC
	ssh | addons-301052 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com' | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:42 UTC |
	addons | addons-301052 addons disable inspektor-gadget --alsologtostderr -v=1 | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:42 UTC | 08 Dec 25 03:43 UTC
	addons | addons-301052 addons disable cloud-spanner --alsologtostderr -v=1 | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:43 UTC | 08 Dec 25 03:43 UTC
	addons | configure registry-creds -f ./testdata/addons_testconfig.json -p addons-301052 | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:43 UTC | 08 Dec 25 03:43 UTC
	addons | addons-301052 addons disable registry-creds --alsologtostderr -v=1 | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:43 UTC | 08 Dec 25 03:43 UTC
	addons | addons-301052 addons disable volumesnapshots --alsologtostderr -v=1 | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:43 UTC | 08 Dec 25 03:43 UTC
	addons | addons-301052 addons disable csi-hostpath-driver --alsologtostderr -v=1 | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:43 UTC | 08 Dec 25 03:43 UTC
	ip | addons-301052 ip | addons-301052 | jenkins | v1.37.0 | 08 Dec 25 03:45 UTC | 08 Dec 25 03:45 UTC
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 03:39:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 03:39:52.062784  130870 out.go:360] Setting OutFile to fd 1 ...
	I1208 03:39:52.063091  130870 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 03:39:52.063102  130870 out.go:374] Setting ErrFile to fd 2...
	I1208 03:39:52.063108  130870 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 03:39:52.063330  130870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
	I1208 03:39:52.063877  130870 out.go:368] Setting JSON to false
	I1208 03:39:52.064772  130870 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1336,"bootTime":1765163856,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 03:39:52.064834  130870 start.go:143] virtualization: kvm guest
	I1208 03:39:52.066681  130870 out.go:179] * [addons-301052] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1208 03:39:52.067918  130870 out.go:179]   - MINIKUBE_LOCATION=21409
	I1208 03:39:52.067958  130870 notify.go:221] Checking for updates...
	I1208 03:39:52.070056  130870 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 03:39:52.071068  130870 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig
	I1208 03:39:52.072058  130870 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube
	I1208 03:39:52.073109  130870 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 03:39:52.074138  130870 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 03:39:52.075357  130870 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 03:39:52.107727  130870 out.go:179] * Using the kvm2 driver based on user configuration
	I1208 03:39:52.108828  130870 start.go:309] selected driver: kvm2
	I1208 03:39:52.108843  130870 start.go:927] validating driver "kvm2" against <nil>
	I1208 03:39:52.108855  130870 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 03:39:52.109633  130870 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 03:39:52.109875  130870 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 03:39:52.109919  130870 cni.go:84] Creating CNI manager for ""
	I1208 03:39:52.109982  130870 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1208 03:39:52.109994  130870 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1208 03:39:52.110058  130870 start.go:353] cluster config:
	{Name:addons-301052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-301052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 03:39:52.110186  130870 iso.go:125] acquiring lock: {Name:mkd550ce23b107beb8be7edee8182e09aac2818e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 03:39:52.111618  130870 out.go:179] * Starting "addons-301052" primary control-plane node in "addons-301052" cluster
	I1208 03:39:52.112643  130870 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 03:39:52.112677  130870 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21409-125868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1208 03:39:52.112702  130870 cache.go:65] Caching tarball of preloaded images
	I1208 03:39:52.112819  130870 preload.go:238] Found /home/jenkins/minikube-integration/21409-125868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1208 03:39:52.112833  130870 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1208 03:39:52.113259  130870 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/config.json ...
	I1208 03:39:52.113290  130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/config.json: {Name:mk0a5f52b95fc620886c94a38f9e732f44198aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 03:39:52.113507  130870 start.go:360] acquireMachinesLock for addons-301052: {Name:mka95432fbbe0b4b61b444ff6ef3750992988c0d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1208 03:39:52.113582  130870 start.go:364] duration metric: took 55.424µs to acquireMachinesLock for "addons-301052"
	I1208 03:39:52.113608  130870 start.go:93] Provisioning new machine with config: &{Name:addons-301052 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-301052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 03:39:52.113683  130870 start.go:125] createHost starting for "" (driver="kvm2")
	I1208 03:39:52.115030  130870 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1208 03:39:52.115184  130870 start.go:159] libmachine.API.Create for "addons-301052" (driver="kvm2")
	I1208 03:39:52.115224  130870 client.go:173] LocalClient.Create starting
	I1208 03:39:52.115356  130870 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca.pem
	I1208 03:39:52.190449  130870 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/cert.pem
	I1208 03:39:52.285135  130870 main.go:143] libmachine: creating domain...
	I1208 03:39:52.285160  130870 main.go:143] libmachine: creating network...
	I1208 03:39:52.286570  130870 main.go:143] libmachine: found existing default network
	I1208 03:39:52.286788  130870 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1208 03:39:52.287305  130870 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d3a740}
	I1208 03:39:52.287421  130870 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-301052</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1208 03:39:52.293294  130870 main.go:143] libmachine: creating private network mk-addons-301052 192.168.39.0/24...
	I1208 03:39:52.362989  130870 main.go:143] libmachine: private network mk-addons-301052 192.168.39.0/24 created
	I1208 03:39:52.363307  130870 main.go:143] libmachine: <network>
	  <name>mk-addons-301052</name>
	  <uuid>5a4d4462-57b6-4f17-b60d-4951aaa68ccb</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:da:82:a0'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
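
The XML above is what libvirt reports back once the private network exists. As a rough sketch (not minikube's actual code), defining and starting an equivalent network through the libvirt Go bindings could look like the following; it assumes libvirt.org/go/libvirt is available (the bindings need cgo and the libvirt headers) and that qemu:///system is reachable:

    // Hedged sketch: persist and start a libvirt network like
    // mk-addons-301052 above. Subnet and name are taken from the log.
    package main

    import (
    	"log"

    	libvirt "libvirt.org/go/libvirt"
    )

    const networkXML = `<network>
      <name>mk-addons-301052</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		log.Fatalf("connect: %v", err)
    	}
    	defer conn.Close()

    	// Define the persistent network object, then bring it up
    	// ("creating private network ..." / "... created" in the log).
    	net, err := conn.NetworkDefineXML(networkXML)
    	if err != nil {
    		log.Fatalf("define network: %v", err)
    	}
    	defer net.Free()
    	if err := net.Create(); err != nil {
    		log.Fatalf("start network: %v", err)
    	}
    }
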
	
	I1208 03:39:52.363353  130870 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052 ...
	I1208 03:39:52.363377  130870 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21409-125868/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1208 03:39:52.363389  130870 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21409-125868/.minikube
	I1208 03:39:52.363492  130870 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21409-125868/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21409-125868/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso...
	I1208 03:39:52.659378  130870 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa...
	I1208 03:39:52.730466  130870 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/addons-301052.rawdisk...
	I1208 03:39:52.730515  130870 main.go:143] libmachine: Writing magic tar header
	I1208 03:39:52.730542  130870 main.go:143] libmachine: Writing SSH key tar header
	I1208 03:39:52.730625  130870 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052 ...
	I1208 03:39:52.730683  130870 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052
	I1208 03:39:52.730706  130870 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052 (perms=drwx------)
	I1208 03:39:52.730718  130870 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21409-125868/.minikube/machines
	I1208 03:39:52.730727  130870 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21409-125868/.minikube/machines (perms=drwxr-xr-x)
	I1208 03:39:52.730739  130870 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21409-125868/.minikube
	I1208 03:39:52.730748  130870 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21409-125868/.minikube (perms=drwxr-xr-x)
	I1208 03:39:52.730758  130870 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21409-125868
	I1208 03:39:52.730767  130870 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21409-125868 (perms=drwxrwxr-x)
	I1208 03:39:52.730779  130870 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1208 03:39:52.730789  130870 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1208 03:39:52.730798  130870 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1208 03:39:52.730808  130870 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1208 03:39:52.730817  130870 main.go:143] libmachine: checking permissions on dir: /home
	I1208 03:39:52.730826  130870 main.go:143] libmachine: skipping /home - not owner
	I1208 03:39:52.730831  130870 main.go:143] libmachine: defining domain...
	I1208 03:39:52.732142  130870 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-301052</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/addons-301052.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-301052'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1208 03:39:52.739873  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:cd:f2:86 in network default
	I1208 03:39:52.740548  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:39:52.740568  130870 main.go:143] libmachine: starting domain...
	I1208 03:39:52.740573  130870 main.go:143] libmachine: ensuring networks are active...
	I1208 03:39:52.741372  130870 main.go:143] libmachine: Ensuring network default is active
	I1208 03:39:52.741792  130870 main.go:143] libmachine: Ensuring network mk-addons-301052 is active
	I1208 03:39:52.742438  130870 main.go:143] libmachine: getting domain XML...
	I1208 03:39:52.743573  130870 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-301052</name>
	  <uuid>e8d346d2-27a3-494e-bffe-43f0ee3efd1d</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/addons-301052.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:58:bd:9c'/>
	      <source network='mk-addons-301052'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:cd:f2:86'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1208 03:39:54.062727  130870 main.go:143] libmachine: waiting for domain to start...
	I1208 03:39:54.064042  130870 main.go:143] libmachine: domain is now running
	I1208 03:39:54.064061  130870 main.go:143] libmachine: waiting for IP...
	I1208 03:39:54.064865  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:39:54.065288  130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
	I1208 03:39:54.065305  130870 main.go:143] libmachine: trying to list again with source=arp
	I1208 03:39:54.065578  130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
	I1208 03:39:54.065640  130870 retry.go:31] will retry after 235.409964ms: waiting for domain to come up
	I1208 03:39:54.303125  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:39:54.303631  130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
	I1208 03:39:54.303653  130870 main.go:143] libmachine: trying to list again with source=arp
	I1208 03:39:54.303918  130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
	I1208 03:39:54.303972  130870 retry.go:31] will retry after 342.161147ms: waiting for domain to come up
	I1208 03:39:54.647715  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:39:54.648373  130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
	I1208 03:39:54.648398  130870 main.go:143] libmachine: trying to list again with source=arp
	I1208 03:39:54.648725  130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
	I1208 03:39:54.648774  130870 retry.go:31] will retry after 327.760524ms: waiting for domain to come up
	I1208 03:39:54.978285  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:39:54.978804  130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
	I1208 03:39:54.978819  130870 main.go:143] libmachine: trying to list again with source=arp
	I1208 03:39:54.979103  130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
	I1208 03:39:54.979147  130870 retry.go:31] will retry after 370.383597ms: waiting for domain to come up
	I1208 03:39:55.350752  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:39:55.351279  130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
	I1208 03:39:55.351297  130870 main.go:143] libmachine: trying to list again with source=arp
	I1208 03:39:55.351669  130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
	I1208 03:39:55.351714  130870 retry.go:31] will retry after 716.591556ms: waiting for domain to come up
	I1208 03:39:56.069747  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:39:56.070319  130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
	I1208 03:39:56.070336  130870 main.go:143] libmachine: trying to list again with source=arp
	I1208 03:39:56.070628  130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
	I1208 03:39:56.070667  130870 retry.go:31] will retry after 595.081797ms: waiting for domain to come up
	I1208 03:39:56.667379  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:39:56.667927  130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
	I1208 03:39:56.667961  130870 main.go:143] libmachine: trying to list again with source=arp
	I1208 03:39:56.668217  130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
	I1208 03:39:56.668257  130870 retry.go:31] will retry after 782.672431ms: waiting for domain to come up
	I1208 03:39:57.452489  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:39:57.453015  130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
	I1208 03:39:57.453034  130870 main.go:143] libmachine: trying to list again with source=arp
	I1208 03:39:57.453333  130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
	I1208 03:39:57.453377  130870 retry.go:31] will retry after 1.054589976s: waiting for domain to come up
	I1208 03:39:58.509708  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:39:58.510329  130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
	I1208 03:39:58.510348  130870 main.go:143] libmachine: trying to list again with source=arp
	I1208 03:39:58.510642  130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
	I1208 03:39:58.510681  130870 retry.go:31] will retry after 1.806097252s: waiting for domain to come up
	I1208 03:40:00.319679  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:00.320204  130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
	I1208 03:40:00.320223  130870 main.go:143] libmachine: trying to list again with source=arp
	I1208 03:40:00.320504  130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
	I1208 03:40:00.320549  130870 retry.go:31] will retry after 1.994021743s: waiting for domain to come up
	I1208 03:40:02.316362  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:02.316943  130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
	I1208 03:40:02.316970  130870 main.go:143] libmachine: trying to list again with source=arp
	I1208 03:40:02.317320  130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
	I1208 03:40:02.317373  130870 retry.go:31] will retry after 1.993048808s: waiting for domain to come up
	I1208 03:40:04.311748  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:04.312345  130870 main.go:143] libmachine: no network interface addresses found for domain addons-301052 (source=lease)
	I1208 03:40:04.312370  130870 main.go:143] libmachine: trying to list again with source=arp
	I1208 03:40:04.312641  130870 main.go:143] libmachine: unable to find current IP address of domain addons-301052 in network mk-addons-301052 (interfaces detected: [])
	I1208 03:40:04.312677  130870 retry.go:31] will retry after 3.244643549s: waiting for domain to come up
	I1208 03:40:07.559217  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:07.559745  130870 main.go:143] libmachine: domain addons-301052 has current primary IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:07.559759  130870 main.go:143] libmachine: found domain IP: 192.168.39.103
	I1208 03:40:07.559781  130870 main.go:143] libmachine: reserving static IP address...
	I1208 03:40:07.560170  130870 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-301052", mac: "52:54:00:58:bd:9c", ip: "192.168.39.103"} in network mk-addons-301052
	I1208 03:40:07.754557  130870 main.go:143] libmachine: reserved static IP address 192.168.39.103 for domain addons-301052
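
The repeated "will retry after ..." lines above come from a poll-with-backoff loop: list the domain's DHCP leases, fall back to ARP, sleep a growing randomized interval, and try again until the IP appears. A compilable sketch of that pattern; the helper name (retryUntil) and the exact growth and jitter factors are assumptions, since the log only shows that the delays grow irregularly:

    // Hedged sketch of the retry pattern visible above (retry.go:31).
    package retry

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryUntil polls probe until it reports success or timeout elapses,
    // sleeping a growing, jittered delay between attempts.
    func retryUntil(timeout time.Duration, probe func() (bool, error)) error {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		ok, err := probe()
    		if err != nil {
    			return err
    		}
    		if ok {
    			return nil
    		}
    		// Jittered backoff, mirroring the irregular intervals in the log.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
    		time.Sleep(sleep)
    		delay = delay * 3 / 2
    	}
    	return errors.New("timed out waiting for domain to come up")
    }
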
	I1208 03:40:07.754586  130870 main.go:143] libmachine: waiting for SSH...
	I1208 03:40:07.754606  130870 main.go:143] libmachine: Getting to WaitForSSH function...
	I1208 03:40:07.757706  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:07.758196  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:minikube Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:07.758233  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:07.758466  130870 main.go:143] libmachine: Using SSH client type: native
	I1208 03:40:07.758834  130870 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1208 03:40:07.758852  130870 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1208 03:40:07.871547  130870 main.go:143] libmachine: SSH cmd err, output: <nil>: 
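
The "waiting for SSH" phase treats the VM as ready once a trivial command (the `exit 0` seen above) runs cleanly over SSH. A standalone sketch using golang.org/x/crypto/ssh, with the key path, user, and address taken from this log; a real caller would wrap the dial in the retry loop sketched earlier, and everything else here is an assumption:

    // Hedged sketch of the SSH readiness probe, not minikube's code.
    package main

    import (
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.103:22", cfg)
    	if err != nil {
    		log.Fatal(err) // not up yet; a caller would retry
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	if err := sess.Run("exit 0"); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("SSH is up")
    }
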
	I1208 03:40:07.872006  130870 main.go:143] libmachine: domain creation complete
	I1208 03:40:07.873598  130870 machine.go:94] provisionDockerMachine start ...
	I1208 03:40:07.875843  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:07.876263  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:07.876288  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:07.876458  130870 main.go:143] libmachine: Using SSH client type: native
	I1208 03:40:07.876654  130870 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1208 03:40:07.876664  130870 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 03:40:07.985495  130870 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1208 03:40:07.985536  130870 buildroot.go:166] provisioning hostname "addons-301052"
	I1208 03:40:07.988547  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:07.988947  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:07.988970  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:07.989150  130870 main.go:143] libmachine: Using SSH client type: native
	I1208 03:40:07.989360  130870 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1208 03:40:07.989371  130870 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-301052 && echo "addons-301052" | sudo tee /etc/hostname
	I1208 03:40:08.116391  130870 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-301052
	
	I1208 03:40:08.119377  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.119802  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:08.119839  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.120010  130870 main.go:143] libmachine: Using SSH client type: native
	I1208 03:40:08.120209  130870 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1208 03:40:08.120230  130870 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-301052' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-301052/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-301052' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 03:40:08.240094  130870 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 03:40:08.240143  130870 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-125868/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-125868/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-125868/.minikube}
	I1208 03:40:08.240177  130870 buildroot.go:174] setting up certificates
	I1208 03:40:08.240191  130870 provision.go:84] configureAuth start
	I1208 03:40:08.243207  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.243574  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:08.243593  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.245767  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.246130  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:08.246150  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.246286  130870 provision.go:143] copyHostCerts
	I1208 03:40:08.246366  130870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-125868/.minikube/cert.pem (1123 bytes)
	I1208 03:40:08.246507  130870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-125868/.minikube/key.pem (1675 bytes)
	I1208 03:40:08.246584  130870 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-125868/.minikube/ca.pem (1078 bytes)
	I1208 03:40:08.246648  130870 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-125868/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca-key.pem org=jenkins.addons-301052 san=[127.0.0.1 192.168.39.103 addons-301052 localhost minikube]
	I1208 03:40:08.275465  130870 provision.go:177] copyRemoteCerts
	I1208 03:40:08.275525  130870 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 03:40:08.277996  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.278358  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:08.278379  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.278510  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	I1208 03:40:08.365089  130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 03:40:08.416344  130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 03:40:08.446154  130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1208 03:40:08.476033  130870 provision.go:87] duration metric: took 235.824192ms to configureAuth
	I1208 03:40:08.476077  130870 buildroot.go:189] setting minikube options for container-runtime
	I1208 03:40:08.476284  130870 config.go:182] Loaded profile config "addons-301052": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 03:40:08.479019  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.479528  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:08.479559  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.479781  130870 main.go:143] libmachine: Using SSH client type: native
	I1208 03:40:08.480090  130870 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1208 03:40:08.480117  130870 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 03:40:08.727822  130870 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 03:40:08.727846  130870 machine.go:97] duration metric: took 854.230626ms to provisionDockerMachine
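The drop-in written just above marks the cluster's service CIDR (10.96.0.0/12, the same ServiceCIDR that appears in the cluster config further down) as an insecure registry range for CRI-O, then restarts the runtime so the flag takes effect; this is what lets registries exposed on in-cluster service IPs be pulled from without TLS. Reproduced by hand, with the same file and value as the log:

  sudo mkdir -p /etc/sysconfig
  printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" |
    sudo tee /etc/sysconfig/crio.minikube
  sudo systemctl restart crio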
	I1208 03:40:08.727858  130870 client.go:176] duration metric: took 16.612624215s to LocalClient.Create
	I1208 03:40:08.727875  130870 start.go:167] duration metric: took 16.612692117s to libmachine.API.Create "addons-301052"
	I1208 03:40:08.727883  130870 start.go:293] postStartSetup for "addons-301052" (driver="kvm2")
	I1208 03:40:08.727892  130870 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 03:40:08.727995  130870 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 03:40:08.731128  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.731543  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:08.731566  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.731728  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	I1208 03:40:08.817591  130870 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 03:40:08.822566  130870 info.go:137] Remote host: Buildroot 2025.02
	I1208 03:40:08.822612  130870 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-125868/.minikube/addons for local assets ...
	I1208 03:40:08.822730  130870 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-125868/.minikube/files for local assets ...
	I1208 03:40:08.822769  130870 start.go:296] duration metric: took 94.879541ms for postStartSetup
	I1208 03:40:08.825830  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.826277  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:08.826321  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.826561  130870 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/config.json ...
	I1208 03:40:08.826781  130870 start.go:128] duration metric: took 16.713086736s to createHost
	I1208 03:40:08.828818  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.829177  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:08.829202  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.829394  130870 main.go:143] libmachine: Using SSH client type: native
	I1208 03:40:08.829602  130870 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.103 22 <nil> <nil>}
	I1208 03:40:08.829611  130870 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1208 03:40:08.940402  130870 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765165208.895450329
	
	I1208 03:40:08.940451  130870 fix.go:216] guest clock: 1765165208.895450329
	I1208 03:40:08.940464  130870 fix.go:229] Guest: 2025-12-08 03:40:08.895450329 +0000 UTC Remote: 2025-12-08 03:40:08.826795401 +0000 UTC m=+16.814407780 (delta=68.654928ms)
	I1208 03:40:08.940503  130870 fix.go:200] guest clock delta is within tolerance: 68.654928ms
	I1208 03:40:08.940511  130870 start.go:83] releasing machines lock for "addons-301052", held for 16.826915901s
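The fix.go lines above read the guest's clock over SSH with date +%s.%N, diff it against the host's, and skip resyncing because the 68.654928ms delta is within tolerance. A rough sketch of that check (the guest address mirrors this run, but the 2-second threshold is an assumption, not minikube's actual tolerance, and sub-second precision is dropped when setting):

  #!/bin/bash
  # Compare guest vs. host clocks; force-set only past a tolerance (sketch).
  HOST=192.168.39.103     # guest address from this run
  TOLERANCE=2             # seconds; an assumed threshold
  guest=$(ssh "root@${HOST}" 'date +%s.%N')
  host=$(date +%s.%N)
  delta=$(echo "$guest - $host" | bc | tr -d -)   # absolute value
  if [ "$(echo "$delta > $TOLERANCE" | bc)" -eq 1 ]; then
    ssh "root@${HOST}" "date -s @${host%.*}"      # whole seconds only
  fi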
	I1208 03:40:08.943284  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.943694  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:08.943719  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.944188  130870 ssh_runner.go:195] Run: cat /version.json
	I1208 03:40:08.944254  130870 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 03:40:08.946920  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.947186  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.947260  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:08.947290  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.947433  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	I1208 03:40:08.947602  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:08.947633  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:08.947788  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	I1208 03:40:09.054357  130870 ssh_runner.go:195] Run: systemctl --version
	I1208 03:40:09.060440  130870 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 03:40:09.217852  130870 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 03:40:09.224236  130870 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 03:40:09.224329  130870 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 03:40:09.243867  130870 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
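Competing CNI configs are moved aside rather than deleted: any bridge or podman conflist in /etc/cni/net.d gains a .mk_disabled suffix so CRI-O stops loading it but it stays recoverable, which is exactly what happened to 87-podman-bridge.conflist above. The find-and-rename from the log, written out with proper quoting:

  # Disable (don't delete) conflicting CNI configs by renaming them.
  sudo find /etc/cni/net.d -maxdepth 1 -type f \
    \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
    -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;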
	I1208 03:40:09.243892  130870 start.go:496] detecting cgroup driver to use...
	I1208 03:40:09.243976  130870 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 03:40:09.262612  130870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 03:40:09.279740  130870 docker.go:218] disabling cri-docker service (if available) ...
	I1208 03:40:09.279811  130870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 03:40:09.297398  130870 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 03:40:09.314260  130870 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 03:40:09.463148  130870 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 03:40:09.667739  130870 docker.go:234] disabling docker service ...
	I1208 03:40:09.667825  130870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 03:40:09.683863  130870 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 03:40:09.699137  130870 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 03:40:09.863751  130870 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 03:40:10.003669  130870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1208 03:40:10.019046  130870 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 03:40:10.041047  130870 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 03:40:10.041112  130870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 03:40:10.053319  130870 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 03:40:10.053394  130870 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 03:40:10.065972  130870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 03:40:10.078708  130870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 03:40:10.091330  130870 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 03:40:10.104664  130870 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 03:40:10.117520  130870 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 03:40:10.138486  130870 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
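Everything CRI-O needs is patched into /etc/crio/crio.conf.d/02-crio.conf with in-place sed edits: the pause image and cgroup manager lines are rewritten, conmon is pinned to the "pod" cgroup, and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls so containers can bind low ports without extra capabilities. Condensed into one script over the same file and keys as the log:

  CONF=/etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
  sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
  # Make sure a default_sysctls list exists, then prepend the low-port sysctl.
  sudo grep -q '^ *default_sysctls' "$CONF" ||
    sudo sed -i '/conmon_cgroup = .*/a default_sysctls = [\n]' "$CONF"
  sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"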
	I1208 03:40:10.150598  130870 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 03:40:10.160961  130870 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1208 03:40:10.161020  130870 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1208 03:40:10.181340  130870 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
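The sysctl probe fails simply because br_netfilter is not loaded yet (the /proc/sys/net/bridge tree only exists once it is), so the module is loaded explicitly and IPv4 forwarding is enabled; both are prerequisites for bridge-based pod networking. The equivalent by hand:

  # Load br_netfilter if the bridge sysctls are missing, then enable forwarding.
  if ! sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
    sudo modprobe br_netfilter
  fi
  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'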
	I1208 03:40:10.193118  130870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 03:40:10.333205  130870 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 03:40:10.447949  130870 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 03:40:10.448058  130870 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 03:40:10.453639  130870 start.go:564] Will wait 60s for crictl version
	I1208 03:40:10.453738  130870 ssh_runner.go:195] Run: which crictl
	I1208 03:40:10.457693  130870 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1208 03:40:10.492113  130870 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1208 03:40:10.492247  130870 ssh_runner.go:195] Run: crio --version
	I1208 03:40:10.521693  130870 ssh_runner.go:195] Run: crio --version
	I1208 03:40:10.554101  130870 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1208 03:40:10.558138  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:10.558578  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:10.558605  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:10.558841  130870 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1208 03:40:10.563488  130870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
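This hosts-file update uses a different idempotency trick than the 127.0.1.1 one earlier: filter out any stale line for the name with grep -v, append the fresh mapping, and copy the temp file back over /etc/hosts in a single sudo cp. The same pattern appears again below for control-plane.minikube.internal. In outline:

  # Replace-or-add a pinned /etc/hosts mapping via a temp file (sketch).
  NAME=host.minikube.internal
  IP=192.168.39.1
  { grep -v "[[:space:]]${NAME}\$" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/h.$$"
  sudo cp "/tmp/h.$$" /etc/hosts && rm "/tmp/h.$$"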
	I1208 03:40:10.578123  130870 kubeadm.go:884] updating cluster {Name:addons-301052 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-301052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 03:40:10.578266  130870 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 03:40:10.578314  130870 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 03:40:10.607363  130870 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1208 03:40:10.607445  130870 ssh_runner.go:195] Run: which lz4
	I1208 03:40:10.611490  130870 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1208 03:40:10.616298  130870 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1208 03:40:10.616340  130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1208 03:40:11.813534  130870 crio.go:462] duration metric: took 1.2020774s to copy over tarball
	I1208 03:40:11.813639  130870 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1208 03:40:13.246359  130870 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.43268676s)
	I1208 03:40:13.246394  130870 crio.go:469] duration metric: took 1.432824376s to extract the tarball
	I1208 03:40:13.246402  130870 ssh_runner.go:146] rm: /preloaded.tar.lz4
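The preload path avoids pulling every image individually: since crictl found no preloaded kube-apiserver image, a ~340 MB tarball of the v1.34.2 cri-o images is copied into the guest and unpacked into /var with lz4, after which the second crictl check confirms all images are present and loading is skipped. The transfer-and-extract step in outline (LOCAL_TARBALL stands in for the host-side cache path):

  LOCAL_TARBALL=preloaded-images.tar.lz4   # placeholder for the cached tarball
  REMOTE=root@192.168.39.103
  # Copy only when the guest doesn't already have it.
  ssh "$REMOTE" 'stat -c "%s %y" /preloaded.tar.lz4' 2>/dev/null ||
    scp "$LOCAL_TARBALL" "$REMOTE:/preloaded.tar.lz4"
  # Unpack preserving xattrs; security.capability matters for some binaries.
  ssh "$REMOTE" 'tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && rm /preloaded.tar.lz4'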
	I1208 03:40:13.283222  130870 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 03:40:13.326298  130870 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 03:40:13.326333  130870 cache_images.go:86] Images are preloaded, skipping loading
	I1208 03:40:13.326344  130870 kubeadm.go:935] updating node { 192.168.39.103 8443 v1.34.2 crio true true} ...
	I1208 03:40:13.326476  130870 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-301052 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.103
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-301052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
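The unit fragment above relies on the standard systemd override trick: the empty ExecStart= first clears whatever ExecStart the base kubelet.service ships, and the second line substitutes the fully flagged command; the fragment lands on disk moments later as the 313-byte 10-kubeadm.conf drop-in. A generic sketch of the same mechanism (binary path and flag are placeholders):

  sudo mkdir -p /etc/systemd/system/kubelet.service.d
  printf '%s\n' '[Service]' 'ExecStart=' \
    'ExecStart=/usr/bin/kubelet --config=/var/lib/kubelet/config.yaml' |
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  sudo systemctl daemon-reload
  sudo systemctl restart kubelet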
	I1208 03:40:13.326548  130870 ssh_runner.go:195] Run: crio config
	I1208 03:40:13.373243  130870 cni.go:84] Creating CNI manager for ""
	I1208 03:40:13.373279  130870 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1208 03:40:13.373300  130870 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 03:40:13.373324  130870 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.103 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-301052 NodeName:addons-301052 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 03:40:13.373448  130870 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.103
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-301052"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.103"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.103"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
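The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are everything kubeadm init --config consumes; note the deliberate test-VM choices, such as failSwapOn: false and the 0%/100% eviction thresholds that switch off disk-pressure housekeeping. A config like this can be sanity-checked without touching the node:

  # Validate the generated config end to end without creating anything.
  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run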
	
	I1208 03:40:13.373536  130870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 03:40:13.385689  130870 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 03:40:13.385776  130870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 03:40:13.397925  130870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1208 03:40:13.418234  130870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 03:40:13.439356  130870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1208 03:40:13.460334  130870 ssh_runner.go:195] Run: grep 192.168.39.103	control-plane.minikube.internal$ /etc/hosts
	I1208 03:40:13.464846  130870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.103	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 03:40:13.479657  130870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 03:40:13.617685  130870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 03:40:13.636568  130870 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052 for IP: 192.168.39.103
	I1208 03:40:13.636599  130870 certs.go:195] generating shared ca certs ...
	I1208 03:40:13.636616  130870 certs.go:227] acquiring lock for ca certs: {Name:mkde290f016452b47757f4047e34e65b6d895da1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 03:40:13.636761  130870 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-125868/.minikube/ca.key
	I1208 03:40:13.702170  130870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-125868/.minikube/ca.crt ...
	I1208 03:40:13.702198  130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/ca.crt: {Name:mke87be34c5c596f3cd382ba989ad1fa916992a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 03:40:13.702380  130870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-125868/.minikube/ca.key ...
	I1208 03:40:13.702391  130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/ca.key: {Name:mkb2ba9e512a7a853703c882645570892099bd39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 03:40:13.702487  130870 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-125868/.minikube/proxy-client-ca.key
	I1208 03:40:13.788118  130870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-125868/.minikube/proxy-client-ca.crt ...
	I1208 03:40:13.788156  130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/proxy-client-ca.crt: {Name:mk5f661ce8f8fdbed090c902672a423b18fef9cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 03:40:13.788345  130870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-125868/.minikube/proxy-client-ca.key ...
	I1208 03:40:13.788357  130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/proxy-client-ca.key: {Name:mk950c67bafa3f05c0edc38ab8b6f5935245787f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 03:40:13.788428  130870 certs.go:257] generating profile certs ...
	I1208 03:40:13.788494  130870 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.key
	I1208 03:40:13.788508  130870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt with IP's: []
	I1208 03:40:13.890840  130870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt ...
	I1208 03:40:13.890870  130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: {Name:mk314789026b1cc69b0fe3b0cb95d601a54847f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 03:40:13.891049  130870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.key ...
	I1208 03:40:13.891061  130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.key: {Name:mk548c638f4510ca3c75d31fcb5f5d337a799c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 03:40:13.891132  130870 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.key.5f15a724
	I1208 03:40:13.891152  130870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.crt.5f15a724 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.103]
	I1208 03:40:13.921322  130870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.crt.5f15a724 ...
	I1208 03:40:13.921353  130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.crt.5f15a724: {Name:mk3ca22ef41f82bdb96104cf5305fd506689b74e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 03:40:13.922061  130870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.key.5f15a724 ...
	I1208 03:40:13.922082  130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.key.5f15a724: {Name:mk3f059544f35a29b9c00dbddf8421936a1654af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 03:40:13.922639  130870 certs.go:382] copying /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.crt.5f15a724 -> /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.crt
	I1208 03:40:13.922723  130870 certs.go:386] copying /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.key.5f15a724 -> /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.key
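The apiserver certificate is issued under minikubeCA with IP SANs covering the service VIP (10.96.0.1, the first address of the 10.96.0.0/12 service range), localhost, and the node IP; the .5f15a724-suffixed working files are then copied to their canonical names. A roughly equivalent one-off issuance with plain openssl, purely illustrative (file names are placeholders and the flow differs from minikube's in-process Go crypto):

  # Issue a CA-signed server cert with IP SANs (illustrative sketch).
  openssl req -new -newkey rsa:2048 -nodes \
    -keyout apiserver.key -out apiserver.csr -subj '/CN=minikube'
  openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 365 -out apiserver.crt \
    -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:192.168.39.103')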
	I1208 03:40:13.922775  130870 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/proxy-client.key
	I1208 03:40:13.922795  130870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/proxy-client.crt with IP's: []
	I1208 03:40:14.062021  130870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/proxy-client.crt ...
	I1208 03:40:14.062055  130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/proxy-client.crt: {Name:mk5cb75985139d01d8a0bdf7fa4fb3424ce2f6b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 03:40:14.062233  130870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/proxy-client.key ...
	I1208 03:40:14.062247  130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/proxy-client.key: {Name:mk3edfbda303f1b4afd4cf4b34ecda448800bb94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 03:40:14.062415  130870 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 03:40:14.062457  130870 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca.pem (1078 bytes)
	I1208 03:40:14.062519  130870 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/cert.pem (1123 bytes)
	I1208 03:40:14.062552  130870 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/key.pem (1675 bytes)
	I1208 03:40:14.063273  130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 03:40:14.094648  130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1208 03:40:14.129045  130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 03:40:14.161155  130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 03:40:14.192130  130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1208 03:40:14.224112  130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 03:40:14.254590  130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 03:40:14.285165  130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1208 03:40:14.321442  130870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 03:40:14.360440  130870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 03:40:14.389467  130870 ssh_runner.go:195] Run: openssl version
	I1208 03:40:14.396097  130870 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 03:40:14.407873  130870 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 03:40:14.419479  130870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 03:40:14.424673  130870 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 03:40 /usr/share/ca-certificates/minikubeCA.pem
	I1208 03:40:14.424739  130870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 03:40:14.432443  130870 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 03:40:14.444761  130870 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
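Trust for the new CA is wired up the way OpenSSL expects: the PEM sits in /usr/share/ca-certificates, its subject hash is computed, and a <hash>.0 symlink is placed in /etc/ssl/certs so hash-based lookups resolve (b5213941 above is minikubeCA's subject hash). The same wiring by hand:

  CERT=/usr/share/ca-certificates/minikubeCA.pem
  HASH=$(openssl x509 -hash -noout -in "$CERT")
  # OpenSSL finds CAs via <subject-hash>.0 symlinks in the certs directory.
  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"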
	I1208 03:40:14.456820  130870 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 03:40:14.461883  130870 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1208 03:40:14.461984  130870 kubeadm.go:401] StartCluster: {Name:addons-301052 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-301052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 03:40:14.462075  130870 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 03:40:14.462135  130870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 03:40:14.496924  130870 cri.go:89] found id: ""
	I1208 03:40:14.497016  130870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 03:40:14.509502  130870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 03:40:14.521606  130870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 03:40:14.533479  130870 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 03:40:14.533500  130870 kubeadm.go:158] found existing configuration files:
	
	I1208 03:40:14.533548  130870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 03:40:14.544943  130870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 03:40:14.545005  130870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 03:40:14.556609  130870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 03:40:14.567391  130870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 03:40:14.567450  130870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 03:40:14.579641  130870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 03:40:14.590979  130870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 03:40:14.591042  130870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 03:40:14.603082  130870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 03:40:14.614391  130870 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 03:40:14.614453  130870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1208 03:40:14.626517  130870 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1208 03:40:14.675560  130870 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1208 03:40:14.675629  130870 kubeadm.go:319] [preflight] Running pre-flight checks
	I1208 03:40:14.768775  130870 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1208 03:40:14.768951  130870 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1208 03:40:14.769092  130870 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1208 03:40:14.780110  130870 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1208 03:40:14.784887  130870 out.go:252]   - Generating certificates and keys ...
	I1208 03:40:14.785075  130870 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1208 03:40:14.785167  130870 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1208 03:40:15.041717  130870 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1208 03:40:15.194374  130870 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1208 03:40:15.337015  130870 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1208 03:40:16.120015  130870 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1208 03:40:16.201047  130870 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1208 03:40:16.201430  130870 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-301052 localhost] and IPs [192.168.39.103 127.0.0.1 ::1]
	I1208 03:40:16.312733  130870 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1208 03:40:16.312888  130870 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-301052 localhost] and IPs [192.168.39.103 127.0.0.1 ::1]
	I1208 03:40:16.385567  130870 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1208 03:40:16.668853  130870 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1208 03:40:16.696797  130870 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1208 03:40:16.696919  130870 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1208 03:40:16.867880  130870 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1208 03:40:16.937367  130870 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1208 03:40:17.463543  130870 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1208 03:40:17.711004  130870 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1208 03:40:17.793853  130870 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1208 03:40:17.795737  130870 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1208 03:40:17.798386  130870 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1208 03:40:17.799961  130870 out.go:252]   - Booting up control plane ...
	I1208 03:40:17.800075  130870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1208 03:40:17.800206  130870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1208 03:40:17.801028  130870 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1208 03:40:17.825288  130870 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1208 03:40:17.825451  130870 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1208 03:40:17.832098  130870 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1208 03:40:17.832303  130870 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1208 03:40:17.832486  130870 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1208 03:40:18.001833  130870 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1208 03:40:18.002037  130870 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1208 03:40:19.504388  130870 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.50496703s
	I1208 03:40:19.510832  130870 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1208 03:40:19.511039  130870 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.103:8443/livez
	I1208 03:40:19.511156  130870 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1208 03:40:19.511307  130870 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1208 03:40:21.957202  130870 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.448458106s
	I1208 03:40:23.441841  130870 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.934618079s
	I1208 03:40:26.508943  130870 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.003890572s
	I1208 03:40:26.530740  130870 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1208 03:40:26.547786  130870 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1208 03:40:26.563795  130870 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1208 03:40:26.563991  130870 kubeadm.go:319] [mark-control-plane] Marking the node addons-301052 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1208 03:40:26.578781  130870 kubeadm.go:319] [bootstrap-token] Using token: 8vbi5u.3kekmhk202vogjki
	I1208 03:40:26.579989  130870 out.go:252]   - Configuring RBAC rules ...
	I1208 03:40:26.580100  130870 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1208 03:40:26.587504  130870 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1208 03:40:26.597022  130870 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1208 03:40:26.601083  130870 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1208 03:40:26.607677  130870 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1208 03:40:26.614614  130870 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1208 03:40:26.918004  130870 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1208 03:40:27.375055  130870 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1208 03:40:27.916885  130870 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1208 03:40:27.917938  130870 kubeadm.go:319] 
	I1208 03:40:27.918004  130870 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1208 03:40:27.918011  130870 kubeadm.go:319] 
	I1208 03:40:27.918086  130870 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1208 03:40:27.918096  130870 kubeadm.go:319] 
	I1208 03:40:27.918130  130870 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1208 03:40:27.918245  130870 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1208 03:40:27.918306  130870 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1208 03:40:27.918313  130870 kubeadm.go:319] 
	I1208 03:40:27.918359  130870 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1208 03:40:27.918365  130870 kubeadm.go:319] 
	I1208 03:40:27.918412  130870 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1208 03:40:27.918419  130870 kubeadm.go:319] 
	I1208 03:40:27.918482  130870 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1208 03:40:27.918595  130870 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1208 03:40:27.918694  130870 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1208 03:40:27.918709  130870 kubeadm.go:319] 
	I1208 03:40:27.918829  130870 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1208 03:40:27.918956  130870 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1208 03:40:27.918968  130870 kubeadm.go:319] 
	I1208 03:40:27.919077  130870 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8vbi5u.3kekmhk202vogjki \
	I1208 03:40:27.919230  130870 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0cd0b3eff0e159b3979f70cb18b8d13b2d72ebd098bd90cdc70e035975d60cfd \
	I1208 03:40:27.919256  130870 kubeadm.go:319] 	--control-plane 
	I1208 03:40:27.919264  130870 kubeadm.go:319] 
	I1208 03:40:27.919370  130870 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1208 03:40:27.919385  130870 kubeadm.go:319] 
	I1208 03:40:27.919506  130870 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8vbi5u.3kekmhk202vogjki \
	I1208 03:40:27.919638  130870 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:0cd0b3eff0e159b3979f70cb18b8d13b2d72ebd098bd90cdc70e035975d60cfd 
	I1208 03:40:27.921388  130870 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
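That lone preflight warning is benign here: minikube starts the kubelet itself with systemctl start, so the unit not being enabled for boot never blocks init. On a node that must survive reboots, the remedy is the one the warning names:

  sudo systemctl enable --now kubelet.service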
	I1208 03:40:27.921428  130870 cni.go:84] Creating CNI manager for ""
	I1208 03:40:27.921437  130870 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1208 03:40:27.923121  130870 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1208 03:40:27.924486  130870 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1208 03:40:27.937429  130870 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1208 03:40:27.959832  130870 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1208 03:40:27.959963  130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 03:40:27.959965  130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-301052 minikube.k8s.io/updated_at=2025_12_08T03_40_27_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=730a0938e5fe3e95dced085e5e597b4345feecad minikube.k8s.io/name=addons-301052 minikube.k8s.io/primary=true
	I1208 03:40:28.107276  130870 ops.go:34] apiserver oom_adj: -16
	I1208 03:40:28.107331  130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 03:40:28.607657  130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 03:40:29.108281  130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 03:40:29.607404  130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 03:40:30.107683  130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 03:40:30.607712  130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 03:40:31.108352  130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 03:40:31.608200  130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 03:40:32.108330  130870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1208 03:40:32.203135  130870 kubeadm.go:1114] duration metric: took 4.243272452s to wait for elevateKubeSystemPrivileges
	I1208 03:40:32.203186  130870 kubeadm.go:403] duration metric: took 17.741209566s to StartCluster
	I1208 03:40:32.203214  130870 settings.go:142] acquiring lock: {Name:mk8cd1e38ee853efa0b11d6abb3aeb99916975f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 03:40:32.203995  130870 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-125868/kubeconfig
	I1208 03:40:32.204439  130870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/kubeconfig: {Name:mk83f735c71f0681683d120e6684a264c50ab0a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 03:40:32.205164  130870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1208 03:40:32.205189  130870 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.103 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 03:40:32.205276  130870 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1208 03:40:32.205398  130870 config.go:182] Loaded profile config "addons-301052": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 03:40:32.205420  130870 addons.go:70] Setting registry-creds=true in profile "addons-301052"
	I1208 03:40:32.205424  130870 addons.go:70] Setting registry=true in profile "addons-301052"
	I1208 03:40:32.205424  130870 addons.go:70] Setting gcp-auth=true in profile "addons-301052"
	I1208 03:40:32.205406  130870 addons.go:70] Setting yakd=true in profile "addons-301052"
	I1208 03:40:32.205454  130870 addons.go:239] Setting addon registry=true in "addons-301052"
	I1208 03:40:32.205464  130870 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-301052"
	I1208 03:40:32.205476  130870 addons.go:70] Setting volcano=true in profile "addons-301052"
	I1208 03:40:32.205468  130870 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-301052"
	I1208 03:40:32.205489  130870 addons.go:239] Setting addon volcano=true in "addons-301052"
	I1208 03:40:32.205489  130870 addons.go:70] Setting default-storageclass=true in profile "addons-301052"
	I1208 03:40:32.205497  130870 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-301052"
	I1208 03:40:32.205501  130870 addons.go:70] Setting ingress=true in profile "addons-301052"
	I1208 03:40:32.205506  130870 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-301052"
	I1208 03:40:32.205519  130870 addons.go:70] Setting ingress-dns=true in profile "addons-301052"
	I1208 03:40:32.205529  130870 host.go:66] Checking if "addons-301052" exists ...
	I1208 03:40:32.205457  130870 mustload.go:66] Loading cluster: addons-301052
	I1208 03:40:32.205544  130870 addons.go:70] Setting storage-provisioner=true in profile "addons-301052"
	I1208 03:40:32.205565  130870 addons.go:239] Setting addon storage-provisioner=true in "addons-301052"
	I1208 03:40:32.205581  130870 addons.go:70] Setting cloud-spanner=true in profile "addons-301052"
	I1208 03:40:32.205481  130870 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-301052"
	I1208 03:40:32.205608  130870 addons.go:239] Setting addon cloud-spanner=true in "addons-301052"
	I1208 03:40:32.205619  130870 addons.go:70] Setting metrics-server=true in profile "addons-301052"
	I1208 03:40:32.205630  130870 addons.go:239] Setting addon metrics-server=true in "addons-301052"
	I1208 03:40:32.205650  130870 host.go:66] Checking if "addons-301052" exists ...
	I1208 03:40:32.205661  130870 addons.go:70] Setting volumesnapshots=true in profile "addons-301052"
	I1208 03:40:32.205674  130870 addons.go:239] Setting addon volumesnapshots=true in "addons-301052"
	I1208 03:40:32.205696  130870 host.go:66] Checking if "addons-301052" exists ...
	I1208 03:40:32.205764  130870 config.go:182] Loaded profile config "addons-301052": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 03:40:32.206048  130870 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-301052"
	I1208 03:40:32.206073  130870 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-301052"
	I1208 03:40:32.206102  130870 host.go:66] Checking if "addons-301052" exists ...
	I1208 03:40:32.205466  130870 addons.go:239] Setting addon yakd=true in "addons-301052"
	I1208 03:40:32.206303  130870 host.go:66] Checking if "addons-301052" exists ...
	I1208 03:40:32.205527  130870 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-301052"
	I1208 03:40:32.206848  130870 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-301052"
	I1208 03:40:32.205535  130870 host.go:66] Checking if "addons-301052" exists ...
	I1208 03:40:32.205466  130870 addons.go:239] Setting addon registry-creds=true in "addons-301052"
	I1208 03:40:32.206920  130870 host.go:66] Checking if "addons-301052" exists ...
	I1208 03:40:32.205512  130870 addons.go:239] Setting addon ingress=true in "addons-301052"
	I1208 03:40:32.207064  130870 host.go:66] Checking if "addons-301052" exists ...
	I1208 03:40:32.207203  130870 out.go:179] * Verifying Kubernetes components...
	I1208 03:40:32.205650  130870 host.go:66] Checking if "addons-301052" exists ...
	I1208 03:40:32.205613  130870 host.go:66] Checking if "addons-301052" exists ...
	I1208 03:40:32.205491  130870 addons.go:70] Setting inspektor-gadget=true in profile "addons-301052"
	I1208 03:40:32.205535  130870 addons.go:239] Setting addon ingress-dns=true in "addons-301052"
	I1208 03:40:32.206877  130870 host.go:66] Checking if "addons-301052" exists ...
	I1208 03:40:32.205594  130870 host.go:66] Checking if "addons-301052" exists ...
	I1208 03:40:32.208047  130870 addons.go:239] Setting addon inspektor-gadget=true in "addons-301052"
	I1208 03:40:32.208106  130870 host.go:66] Checking if "addons-301052" exists ...
	I1208 03:40:32.208325  130870 host.go:66] Checking if "addons-301052" exists ...
	I1208 03:40:32.209819  130870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 03:40:32.211780  130870 host.go:66] Checking if "addons-301052" exists ...
	I1208 03:40:32.214258  130870 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-301052"
	I1208 03:40:32.214329  130870 host.go:66] Checking if "addons-301052" exists ...
	I1208 03:40:32.214359  130870 addons.go:239] Setting addon default-storageclass=true in "addons-301052"
	I1208 03:40:32.214400  130870 host.go:66] Checking if "addons-301052" exists ...
	I1208 03:40:32.214616  130870 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1208 03:40:32.214654  130870 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	W1208 03:40:32.214724  130870 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
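Because the crio runtime rules out 'volcano', the set of addons that actually remains enabled can be listed after startup (standard minikube subcommand, shown here as a sketch):

	out/minikube-linux-amd64 -p addons-301052 addons list
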
	I1208 03:40:32.215767  130870 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1208 03:40:32.216373  130870 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1208 03:40:32.216379  130870 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1208 03:40:32.216415  130870 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1208 03:40:32.216733  130870 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1208 03:40:32.216428  130870 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1208 03:40:32.216453  130870 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1208 03:40:32.217305  130870 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1208 03:40:32.217980  130870 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1208 03:40:32.218076  130870 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1208 03:40:32.218595  130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1208 03:40:32.218693  130870 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1208 03:40:32.218722  130870 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1208 03:40:32.218725  130870 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1208 03:40:32.219099  130870 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1208 03:40:32.218728  130870 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 03:40:32.218731  130870 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1208 03:40:32.219319  130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1208 03:40:32.219329  130870 out.go:179]   - Using image docker.io/registry:3.0.0
	I1208 03:40:32.219340  130870 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1208 03:40:32.219341  130870 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1208 03:40:32.219482  130870 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1208 03:40:32.220018  130870 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1208 03:40:32.220623  130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1208 03:40:32.220650  130870 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1208 03:40:32.220665  130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1208 03:40:32.220234  130870 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 03:40:32.221055  130870 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 03:40:32.220670  130870 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 03:40:32.221188  130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 03:40:32.221279  130870 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1208 03:40:32.221328  130870 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1208 03:40:32.221570  130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1208 03:40:32.221333  130870 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1208 03:40:32.221666  130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1208 03:40:32.222002  130870 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1208 03:40:32.222004  130870 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1208 03:40:32.222065  130870 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1208 03:40:32.222346  130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1208 03:40:32.224204  130870 out.go:179]   - Using image docker.io/busybox:stable
	I1208 03:40:32.224214  130870 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1208 03:40:32.224206  130870 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1208 03:40:32.225357  130870 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1208 03:40:32.225371  130870 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1208 03:40:32.225399  130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1208 03:40:32.225423  130870 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1208 03:40:32.225440  130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1208 03:40:32.227514  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.227911  130870 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1208 03:40:32.228813  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.229950  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:32.229996  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.230710  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.230724  130870 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1208 03:40:32.231008  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	I1208 03:40:32.231056  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:32.231106  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.231824  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	I1208 03:40:32.231922  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.232374  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.232831  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:32.232875  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.233273  130870 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1208 03:40:32.233689  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	I1208 03:40:32.233807  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:32.233842  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.234087  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.234285  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:32.234335  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.234401  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.234580  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	I1208 03:40:32.235195  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	I1208 03:40:32.235610  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.235661  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.235711  130870 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1208 03:40:32.235758  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:32.235785  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.236320  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	I1208 03:40:32.236530  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.236616  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.236661  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:32.236712  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.236826  130870 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1208 03:40:32.236859  130870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1208 03:40:32.236891  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:32.236945  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.237074  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.237127  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:32.237157  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.237283  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	I1208 03:40:32.237591  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	I1208 03:40:32.237867  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	I1208 03:40:32.237917  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:32.237953  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.237969  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:32.237998  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.238282  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	I1208 03:40:32.238412  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.238796  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:32.238832  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.238823  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	I1208 03:40:32.239076  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	I1208 03:40:32.239131  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.239678  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:32.239710  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.239752  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:32.239795  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.239916  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	I1208 03:40:32.240187  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	I1208 03:40:32.241936  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.242386  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:32.242424  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:32.242644  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	W1208 03:40:32.438639  130870 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:49342->192.168.39.103:22: read: connection reset by peer
	I1208 03:40:32.438688  130870 retry.go:31] will retry after 312.584824ms: ssh: handshake failed: read tcp 192.168.39.1:49342->192.168.39.103:22: read: connection reset by peer
	W1208 03:40:32.451493  130870 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:49364->192.168.39.103:22: read: connection reset by peer
	I1208 03:40:32.451536  130870 retry.go:31] will retry after 275.869476ms: ssh: handshake failed: read tcp 192.168.39.1:49364->192.168.39.103:22: read: connection reset by peer
	I1208 03:40:32.743413  130870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 03:40:32.743486  130870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
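The sed pipeline above splices a hosts block in front of the Corefile's forward directive (and a log directive in front of errors) before replacing the ConfigMap. Reconstructed from the sed expressions themselves, the edited fragment should read (other stock directives omitted):

	log
	errors
	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf
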
	I1208 03:40:32.903654  130870 node_ready.go:35] waiting up to 6m0s for node "addons-301052" to be "Ready" ...
	I1208 03:40:32.910215  130870 node_ready.go:49] node "addons-301052" is "Ready"
	I1208 03:40:32.910251  130870 node_ready.go:38] duration metric: took 6.557861ms for node "addons-301052" to be "Ready" ...
	I1208 03:40:32.910268  130870 api_server.go:52] waiting for apiserver process to appear ...
	I1208 03:40:32.910329  130870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 03:40:32.931744  130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1208 03:40:32.936423  130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 03:40:32.969859  130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1208 03:40:32.982756  130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1208 03:40:32.998149  130870 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1208 03:40:32.998178  130870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1208 03:40:33.000573  130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 03:40:33.006955  130870 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1208 03:40:33.006997  130870 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1208 03:40:33.014001  130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1208 03:40:33.029790  130870 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1208 03:40:33.029834  130870 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1208 03:40:33.051167  130870 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1208 03:40:33.051198  130870 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1208 03:40:33.068357  130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1208 03:40:33.082300  130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1208 03:40:33.179255  130870 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1208 03:40:33.179282  130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1208 03:40:33.275808  130870 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1208 03:40:33.275838  130870 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1208 03:40:33.281466  130870 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1208 03:40:33.281490  130870 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1208 03:40:33.285887  130870 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1208 03:40:33.285922  130870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1208 03:40:33.296230  130870 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1208 03:40:33.296254  130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1208 03:40:33.307094  130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1208 03:40:33.319025  130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1208 03:40:33.514770  130870 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1208 03:40:33.514804  130870 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1208 03:40:33.564857  130870 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1208 03:40:33.564886  130870 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1208 03:40:33.573720  130870 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1208 03:40:33.573754  130870 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1208 03:40:33.579531  130870 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1208 03:40:33.579563  130870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1208 03:40:33.584438  130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1208 03:40:33.918356  130870 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1208 03:40:33.918415  130870 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1208 03:40:33.987365  130870 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1208 03:40:33.987393  130870 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1208 03:40:34.025815  130870 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1208 03:40:34.025844  130870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1208 03:40:34.032583  130870 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1208 03:40:34.032610  130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1208 03:40:34.474803  130870 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1208 03:40:34.474838  130870 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1208 03:40:34.506692  130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1208 03:40:34.605794  130870 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1208 03:40:34.605819  130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1208 03:40:34.620876  130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1208 03:40:35.104082  130870 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1208 03:40:35.104109  130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1208 03:40:35.229940  130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1208 03:40:35.574659  130870 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.664292162s)
	I1208 03:40:35.574711  130870 api_server.go:72] duration metric: took 3.369490663s to wait for apiserver process to appear ...
	I1208 03:40:35.574718  130870 api_server.go:88] waiting for apiserver healthz status ...
	I1208 03:40:35.574758  130870 api_server.go:253] Checking apiserver healthz at https://192.168.39.103:8443/healthz ...
	I1208 03:40:35.575015  130870 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.831498967s)
	I1208 03:40:35.575047  130870 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1208 03:40:35.611851  130870 api_server.go:279] https://192.168.39.103:8443/healthz returned 200:
	ok
	I1208 03:40:35.621404  130870 api_server.go:141] control plane version: v1.34.2
	I1208 03:40:35.621437  130870 api_server.go:131] duration metric: took 46.710506ms to wait for apiserver health ...
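The healthz probe above is reproducible by hand against the endpoint shown in the log; /healthz is readable anonymously under Kubernetes' default RBAC bindings, and -k skips verification of the cluster's self-signed CA (a sketch):

	curl -k https://192.168.39.103:8443/healthz
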
	I1208 03:40:35.621447  130870 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 03:40:35.717605  130870 system_pods.go:59] 10 kube-system pods found
	I1208 03:40:35.717648  130870 system_pods.go:61] "amd-gpu-device-plugin-mn6gz" [34c7c111-d878-4bea-8f1c-64b08778e73b] Pending
	I1208 03:40:35.717658  130870 system_pods.go:61] "coredns-66bc5c9577-wx9fk" [dac1139c-e1e9-46d1-9ba7-0d171fde95a2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 03:40:35.717665  130870 system_pods.go:61] "coredns-66bc5c9577-z7cr6" [f39adb02-b124-455f-b9aa-bb4f34c022f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 03:40:35.717671  130870 system_pods.go:61] "etcd-addons-301052" [f848e815-7dfb-410e-9326-db452be103d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1208 03:40:35.717676  130870 system_pods.go:61] "kube-apiserver-addons-301052" [73a6eccb-55a9-46a4-a3e7-b9f83fe33aad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1208 03:40:35.717680  130870 system_pods.go:61] "kube-controller-manager-addons-301052" [deae2e02-cb37-4628-aa9d-6ee9a4756a1b] Running
	I1208 03:40:35.717688  130870 system_pods.go:61] "kube-proxy-7c4kr" [7d58c183-e8c6-40c7-8fb9-3cc7bbc35eef] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1208 03:40:35.717698  130870 system_pods.go:61] "kube-scheduler-addons-301052" [bf8d885f-4cfd-4977-92b0-5afc4838c1fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1208 03:40:35.717704  130870 system_pods.go:61] "nvidia-device-plugin-daemonset-fvs49" [c8a1320a-c4e2-4f2a-be37-56390e503e79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1208 03:40:35.717712  130870 system_pods.go:61] "registry-creds-764b6fb674-f84c9" [c16c8605-5ed0-4a5b-9291-123778fc160f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1208 03:40:35.717721  130870 system_pods.go:74] duration metric: took 96.267166ms to wait for pod list to return data ...
	I1208 03:40:35.717733  130870 default_sa.go:34] waiting for default service account to be created ...
	I1208 03:40:35.751354  130870 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1208 03:40:35.751395  130870 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1208 03:40:35.888471  130870 default_sa.go:45] found service account: "default"
	I1208 03:40:35.888515  130870 default_sa.go:55] duration metric: took 170.774225ms for default service account to be created ...
	I1208 03:40:35.888533  130870 system_pods.go:116] waiting for k8s-apps to be running ...
	I1208 03:40:36.011976  130870 system_pods.go:86] 10 kube-system pods found
	I1208 03:40:36.012015  130870 system_pods.go:89] "amd-gpu-device-plugin-mn6gz" [34c7c111-d878-4bea-8f1c-64b08778e73b] Pending
	I1208 03:40:36.012026  130870 system_pods.go:89] "coredns-66bc5c9577-wx9fk" [dac1139c-e1e9-46d1-9ba7-0d171fde95a2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 03:40:36.012037  130870 system_pods.go:89] "coredns-66bc5c9577-z7cr6" [f39adb02-b124-455f-b9aa-bb4f34c022f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 03:40:36.012048  130870 system_pods.go:89] "etcd-addons-301052" [f848e815-7dfb-410e-9326-db452be103d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1208 03:40:36.012064  130870 system_pods.go:89] "kube-apiserver-addons-301052" [73a6eccb-55a9-46a4-a3e7-b9f83fe33aad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1208 03:40:36.012070  130870 system_pods.go:89] "kube-controller-manager-addons-301052" [deae2e02-cb37-4628-aa9d-6ee9a4756a1b] Running
	I1208 03:40:36.012081  130870 system_pods.go:89] "kube-proxy-7c4kr" [7d58c183-e8c6-40c7-8fb9-3cc7bbc35eef] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1208 03:40:36.012088  130870 system_pods.go:89] "kube-scheduler-addons-301052" [bf8d885f-4cfd-4977-92b0-5afc4838c1fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1208 03:40:36.012101  130870 system_pods.go:89] "nvidia-device-plugin-daemonset-fvs49" [c8a1320a-c4e2-4f2a-be37-56390e503e79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1208 03:40:36.012113  130870 system_pods.go:89] "registry-creds-764b6fb674-f84c9" [c16c8605-5ed0-4a5b-9291-123778fc160f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1208 03:40:36.012133  130870 retry.go:31] will retry after 252.20739ms: missing components: kube-proxy
	I1208 03:40:36.144047  130870 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-301052" context rescaled to 1 replicas
	I1208 03:40:36.269064  130870 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1208 03:40:36.269099  130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1208 03:40:36.310617  130870 system_pods.go:86] 10 kube-system pods found
	I1208 03:40:36.310654  130870 system_pods.go:89] "amd-gpu-device-plugin-mn6gz" [34c7c111-d878-4bea-8f1c-64b08778e73b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1208 03:40:36.310662  130870 system_pods.go:89] "coredns-66bc5c9577-wx9fk" [dac1139c-e1e9-46d1-9ba7-0d171fde95a2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 03:40:36.310670  130870 system_pods.go:89] "coredns-66bc5c9577-z7cr6" [f39adb02-b124-455f-b9aa-bb4f34c022f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 03:40:36.310675  130870 system_pods.go:89] "etcd-addons-301052" [f848e815-7dfb-410e-9326-db452be103d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1208 03:40:36.310682  130870 system_pods.go:89] "kube-apiserver-addons-301052" [73a6eccb-55a9-46a4-a3e7-b9f83fe33aad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1208 03:40:36.310692  130870 system_pods.go:89] "kube-controller-manager-addons-301052" [deae2e02-cb37-4628-aa9d-6ee9a4756a1b] Running
	I1208 03:40:36.310700  130870 system_pods.go:89] "kube-proxy-7c4kr" [7d58c183-e8c6-40c7-8fb9-3cc7bbc35eef] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1208 03:40:36.310708  130870 system_pods.go:89] "kube-scheduler-addons-301052" [bf8d885f-4cfd-4977-92b0-5afc4838c1fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1208 03:40:36.310717  130870 system_pods.go:89] "nvidia-device-plugin-daemonset-fvs49" [c8a1320a-c4e2-4f2a-be37-56390e503e79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1208 03:40:36.310738  130870 system_pods.go:89] "registry-creds-764b6fb674-f84c9" [c16c8605-5ed0-4a5b-9291-123778fc160f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1208 03:40:36.310759  130870 retry.go:31] will retry after 364.474332ms: missing components: kube-proxy
	I1208 03:40:36.689374  130870 system_pods.go:86] 10 kube-system pods found
	I1208 03:40:36.689408  130870 system_pods.go:89] "amd-gpu-device-plugin-mn6gz" [34c7c111-d878-4bea-8f1c-64b08778e73b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1208 03:40:36.689416  130870 system_pods.go:89] "coredns-66bc5c9577-wx9fk" [dac1139c-e1e9-46d1-9ba7-0d171fde95a2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 03:40:36.689425  130870 system_pods.go:89] "coredns-66bc5c9577-z7cr6" [f39adb02-b124-455f-b9aa-bb4f34c022f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 03:40:36.689429  130870 system_pods.go:89] "etcd-addons-301052" [f848e815-7dfb-410e-9326-db452be103d9] Running
	I1208 03:40:36.689436  130870 system_pods.go:89] "kube-apiserver-addons-301052" [73a6eccb-55a9-46a4-a3e7-b9f83fe33aad] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1208 03:40:36.689442  130870 system_pods.go:89] "kube-controller-manager-addons-301052" [deae2e02-cb37-4628-aa9d-6ee9a4756a1b] Running
	I1208 03:40:36.689447  130870 system_pods.go:89] "kube-proxy-7c4kr" [7d58c183-e8c6-40c7-8fb9-3cc7bbc35eef] Running
	I1208 03:40:36.689454  130870 system_pods.go:89] "kube-scheduler-addons-301052" [bf8d885f-4cfd-4977-92b0-5afc4838c1fe] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1208 03:40:36.689468  130870 system_pods.go:89] "nvidia-device-plugin-daemonset-fvs49" [c8a1320a-c4e2-4f2a-be37-56390e503e79] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1208 03:40:36.689484  130870 system_pods.go:89] "registry-creds-764b6fb674-f84c9" [c16c8605-5ed0-4a5b-9291-123778fc160f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1208 03:40:36.689496  130870 system_pods.go:126] duration metric: took 800.953885ms to wait for k8s-apps to be running ...
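The retry loop above is the programmatic equivalent of watching the namespace from the host; once the profile's kubeconfig context exists, the same view is available directly (a sketch, assuming the context carries the profile's name):

	kubectl --context addons-301052 get pods -n kube-system
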
	I1208 03:40:36.689506  130870 system_svc.go:44] waiting for kubelet service to be running ....
	I1208 03:40:36.689582  130870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 03:40:36.848488  130870 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1208 03:40:36.848525  130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1208 03:40:37.396372  130870 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1208 03:40:37.396405  130870 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1208 03:40:37.649540  130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1208 03:40:38.642841  130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.706379695s)
	I1208 03:40:38.645312  130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.713524243s)
	I1208 03:40:38.935296  130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.965393864s)
	I1208 03:40:38.935377  130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.952586125s)
	I1208 03:40:39.331359  130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.330751583s)
	I1208 03:40:39.331497  130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (6.317452994s)
	I1208 03:40:39.331546  130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.26315561s)
	I1208 03:40:39.331577  130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.249249786s)
	I1208 03:40:39.669370  130870 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1208 03:40:39.672433  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:39.672990  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:39.673025  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:39.673242  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	I1208 03:40:40.193825  130870 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1208 03:40:40.447167  130870 addons.go:239] Setting addon gcp-auth=true in "addons-301052"
	I1208 03:40:40.447248  130870 host.go:66] Checking if "addons-301052" exists ...
	I1208 03:40:40.449464  130870 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1208 03:40:40.452175  130870 main.go:143] libmachine: domain addons-301052 has defined MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:40.452689  130870 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:58:bd:9c", ip: ""} in network mk-addons-301052: {Iface:virbr1 ExpiryTime:2025-12-08 04:40:07 +0000 UTC Type:0 Mac:52:54:00:58:bd:9c Iaid: IPaddr:192.168.39.103 Prefix:24 Hostname:addons-301052 Clientid:01:52:54:00:58:bd:9c}
	I1208 03:40:40.452727  130870 main.go:143] libmachine: domain addons-301052 has defined IP address 192.168.39.103 and MAC address 52:54:00:58:bd:9c in network mk-addons-301052
	I1208 03:40:40.452912  130870 sshutil.go:53] new ssh client: &{IP:192.168.39.103 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/addons-301052/id_rsa Username:docker}
	I1208 03:40:41.774577  130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.467431785s)
	I1208 03:40:41.774627  130870 addons.go:495] Verifying addon ingress=true in "addons-301052"
	I1208 03:40:41.774669  130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (8.455603533s)
	I1208 03:40:41.774717  130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.190248379s)
	I1208 03:40:41.774828  130870 addons.go:495] Verifying addon registry=true in "addons-301052"
	I1208 03:40:41.774748  130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.268024323s)
	I1208 03:40:41.774806  130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.153903517s)
	I1208 03:40:41.775449  130870 addons.go:495] Verifying addon metrics-server=true in "addons-301052"
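Once the metrics-server verification above completes, the resource metrics API becomes queryable. A quick check from the host, assuming the same kubeconfig context the test uses, would be:

	kubectl --context addons-301052 top nodes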
	I1208 03:40:41.776235  130870 out.go:179] * Verifying ingress addon...
	I1208 03:40:41.776892  130870 out.go:179] * Verifying registry addon...
	I1208 03:40:41.776892  130870 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-301052 service yakd-dashboard -n yakd-dashboard
	
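As a side note, minikube service opens the NodePort URL in a browser; appending --url prints the URL instead, which is handier on a headless CI host (a sketch of the same command the log prints):

	minikube -p addons-301052 service yakd-dashboard -n yakd-dashboard --url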
	I1208 03:40:41.778717  130870 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1208 03:40:41.779589  130870 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1208 03:40:41.813408  130870 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1208 03:40:41.813438  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:41.813659  130870 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1208 03:40:41.813677  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
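The kapi.go watchers above poll pod state until the pods report Ready. A roughly equivalent manual check, sketched here with the same label selectors and namespaces the log names, would be:

	kubectl --context addons-301052 wait --for=condition=ready pod -l app.kubernetes.io/name=ingress-nginx -n ingress-nginx --timeout=90s
	kubectl --context addons-301052 wait --for=condition=ready pod -l kubernetes.io/minikube-addons=registry -n kube-system --timeout=90s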
	I1208 03:40:41.984750  130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.754757799s)
	I1208 03:40:41.984808  130870 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (5.295203418s)
	W1208 03:40:41.984813  130870 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1208 03:40:41.984826  130870 system_svc.go:56] duration metric: took 5.295316828s WaitForService to wait for kubelet
	I1208 03:40:41.984836  130870 retry.go:31] will retry after 209.91093ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
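The failure above is the usual CRD ordering race: csi-hostpath-snapshotclass.yaml declares a VolumeSnapshotClass in the same apply batch that creates the snapshot.storage.k8s.io CRDs, and the API server has not registered the new kind yet, hence "ensure CRDs are installed first". The retry announced above succeeds on the next attempt (the apply --force run a few lines below). A manual equivalent, sketched with the file paths from the log, would apply the CRD first, wait for it to be established, then apply the class:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml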
	I1208 03:40:41.984836  130870 kubeadm.go:587] duration metric: took 9.779616044s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 03:40:41.984864  130870 node_conditions.go:102] verifying NodePressure condition ...
	I1208 03:40:42.029586  130870 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1208 03:40:42.029622  130870 node_conditions.go:123] node cpu capacity is 2
	I1208 03:40:42.029643  130870 node_conditions.go:105] duration metric: took 44.773768ms to run NodePressure ...
	I1208 03:40:42.029657  130870 start.go:242] waiting for startup goroutines ...
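The NodePressure check above reads the node's reported capacity (17734596Ki ephemeral storage, 2 CPUs). The same fields can be inspected directly; a one-liner, assuming the node carries the profile name as minikube defaults to:

	kubectl --context addons-301052 get node addons-301052 -o jsonpath='{.status.capacity}'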
	I1208 03:40:42.195363  130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1208 03:40:42.289313  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:42.289418  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:42.787777  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:42.790607  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:43.205301  130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.55570028s)
	I1208 03:40:43.205333  130870 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.755837836s)
	I1208 03:40:43.205362  130870 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-301052"
	I1208 03:40:43.206809  130870 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1208 03:40:43.206852  130870 out.go:179] * Verifying csi-hostpath-driver addon...
	I1208 03:40:43.207891  130870 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1208 03:40:43.208801  130870 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1208 03:40:43.208935  130870 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1208 03:40:43.208962  130870 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1208 03:40:43.228431  130870 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1208 03:40:43.228460  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:43.299687  130870 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1208 03:40:43.299713  130870 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1208 03:40:43.308943  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:43.309511  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:43.409291  130870 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1208 03:40:43.409318  130870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1208 03:40:43.506570  130870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1208 03:40:43.712539  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:43.786535  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:43.786536  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:43.910370  130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.714959512s)
	I1208 03:40:44.215449  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:44.284890  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:44.285469  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:44.661533  130870 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.154924671s)
	I1208 03:40:44.662696  130870 addons.go:495] Verifying addon gcp-auth=true in "addons-301052"
	I1208 03:40:44.664177  130870 out.go:179] * Verifying gcp-auth addon...
	I1208 03:40:44.666590  130870 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1208 03:40:44.704578  130870 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1208 03:40:44.704616  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
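To watch the same gcp-auth webhook pod from the host, one could run the following (a sketch using the selector and namespace from the log):

	kubectl --context addons-301052 get pods -n gcp-auth -l kubernetes.io/minikube-addons=gcp-auth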
	I1208 03:40:44.747289  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:44.796969  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:44.796968  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:45.172448  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:45.214376  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:45.290230  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:45.290539  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:45.673420  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:45.713951  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:45.782340  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:45.788443  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:46.172110  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:46.212540  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:46.287677  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:46.290360  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:46.674477  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:46.714112  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:46.791117  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:46.792022  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:47.172300  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:47.216055  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:47.282308  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:47.283884  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:47.670827  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:47.714437  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:47.785132  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:47.786021  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:48.177111  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:48.275725  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:48.283468  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:48.284139  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:48.671468  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:48.771873  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:48.783071  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:48.783174  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:49.175169  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:49.212720  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:49.282764  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:49.283026  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:49.670809  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:49.713684  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:49.783134  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:49.783802  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:50.170365  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:50.213247  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:50.284383  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:50.285989  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:50.671709  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:50.713715  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:50.784350  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:50.784612  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:51.171112  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:51.215001  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:51.287752  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:51.288144  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:51.672418  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:51.714819  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:51.784169  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:51.784354  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:52.171233  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:52.214157  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:52.284112  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:52.287151  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:52.671630  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:52.714329  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:52.783572  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:52.784299  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:53.170282  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:53.213872  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:53.283211  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:53.283324  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:53.671442  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:53.718230  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:53.783203  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:53.784084  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:54.171190  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:54.272667  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:54.283419  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:54.283546  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:54.670750  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:54.713670  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:54.782911  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:54.783283  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:55.170468  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:55.213743  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:55.286206  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:55.287363  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:55.670794  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:55.714921  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:55.782351  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:55.783650  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:56.171553  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:56.213699  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:56.283130  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:56.283167  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:56.672050  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:56.713259  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:56.793129  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:56.793300  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:57.170991  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:57.219503  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:57.284048  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:57.284057  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:57.670477  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:57.714183  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:57.782767  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:57.783582  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:58.170240  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:58.213223  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:58.283784  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:58.285026  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:58.672729  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:58.713028  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:58.785618  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:58.785778  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:59.170247  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:59.213044  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:59.283078  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:40:59.283521  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:59.671276  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:40:59.714399  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:40:59.783445  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:40:59.783603  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:00.170975  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:00.216096  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:00.286297  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:41:00.286401  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:00.670955  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:00.714214  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:00.784351  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:41:00.784484  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:01.170476  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:01.213111  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:01.283462  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:41:01.283813  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:01.670371  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:01.713422  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:01.783488  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:01.784140  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:41:02.169994  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:02.212410  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:02.283599  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:41:02.284673  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:02.670846  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:02.713412  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:02.782244  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:02.783728  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:41:03.170367  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:03.212832  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:03.285353  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:03.286433  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:41:03.670749  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:03.714933  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:03.781721  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:03.783818  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:41:04.170884  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:04.217184  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:04.282243  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:04.285697  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:41:04.670536  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:04.713543  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:04.784340  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:04.784434  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:41:05.172671  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:05.216061  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:05.282697  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:05.282734  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1208 03:41:05.671684  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:05.715015  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:05.783660  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:05.786510  130870 kapi.go:107] duration metric: took 24.006918917s to wait for kubernetes.io/minikube-addons=registry ...
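With the registry watcher satisfied after roughly 24s, only the ingress, csi-hostpath-driver, and gcp-auth watchers keep polling below. The overall addon state can be checked at any point with:

	minikube -p addons-301052 addons list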
	I1208 03:41:06.172743  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:06.215571  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:06.284552  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:06.672849  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:06.714124  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:06.783415  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:07.170011  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:07.212225  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:07.282939  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:07.670796  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:07.713420  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:07.784562  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:08.170590  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:08.213257  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:08.282488  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:08.670992  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:08.713846  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:08.782167  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:09.171182  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:09.212608  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:09.287380  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:09.672734  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:09.713397  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:09.783642  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:10.173207  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:10.213489  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:10.282301  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:10.672660  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:10.718652  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:10.783579  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:11.174135  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:11.217309  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:11.285166  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:11.671515  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:11.714207  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:11.783167  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:12.173395  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:12.215163  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:12.285555  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:12.670527  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:12.713455  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:12.784770  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:13.170768  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:13.214094  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:13.282894  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:13.670840  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:13.714432  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:13.783389  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:14.172359  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:14.213508  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:14.284624  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:14.674834  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:14.717856  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:14.787806  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:15.171593  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:15.218160  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:15.283279  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:15.671342  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:15.715032  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:15.784552  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:16.292312  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:16.294048  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:16.294357  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:16.674831  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:16.714637  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:16.786378  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:17.170532  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:17.215226  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:17.316145  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:17.670891  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:17.716422  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:17.782830  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:18.170264  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:18.214503  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:18.283877  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:18.670619  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:18.713191  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:18.782643  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:19.169783  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:19.213459  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:19.282826  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:19.672341  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:19.715389  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:19.783267  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:20.171154  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:20.214968  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:20.283770  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:20.671007  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:20.712803  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:20.783887  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:21.170592  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:21.213469  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:21.283165  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:21.670886  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:21.712729  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:21.782747  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:22.170408  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:22.213031  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:22.282344  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:22.671408  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:22.719523  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:22.782187  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:23.173004  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:23.213508  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:23.285374  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:23.672065  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:23.715711  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:23.783810  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:24.176310  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:24.213007  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:24.284400  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:24.672425  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:24.717081  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:24.789385  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:25.170824  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:25.214286  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:25.283012  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:25.670799  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:25.714093  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:25.782871  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:26.171481  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:26.212794  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:26.283913  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:26.670932  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:26.712837  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:26.782983  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:27.170961  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:27.212696  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:27.283245  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:27.670877  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:27.711967  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:27.785415  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:28.172040  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:28.212553  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:28.283744  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:28.674291  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:28.958205  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:28.967183  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:29.170193  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:29.216551  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:29.285092  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:29.672185  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:29.712667  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:29.783193  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:30.170794  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:30.213295  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:30.283794  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:30.670758  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:30.714189  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:30.782702  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:31.170821  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:31.214432  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:31.282563  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:31.672400  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:31.716300  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:31.785586  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:32.172715  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:32.215254  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:32.282677  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:32.671811  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:32.713954  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:32.783631  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:33.169488  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:33.213577  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:33.285197  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:33.673009  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:33.716728  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:33.784694  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:34.171844  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:34.213758  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:34.282911  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:34.674046  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:34.772793  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:34.873305  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:35.185296  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:35.215303  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:35.297468  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:35.673102  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:35.713833  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:35.782942  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:36.172045  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:36.218964  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:36.283567  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:36.671740  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:36.713536  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:36.782468  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:37.219614  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:37.219620  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:37.320025  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:37.670513  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:37.713098  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:37.782283  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:38.174341  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:38.213062  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:38.284279  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:38.671186  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:38.713042  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:38.782278  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:39.171210  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:39.212974  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:39.285081  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:39.673011  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:39.716044  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:39.782432  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:40.170186  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:40.216584  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:40.284687  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:40.675920  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:40.714976  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:40.790509  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:41.179441  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:41.220174  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:41.285379  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:41.675636  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:41.713775  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:41.786187  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:42.174451  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:42.223168  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:42.283877  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:42.671809  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:42.712727  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:42.784361  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:43.170756  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:43.219832  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:43.370851  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:43.671960  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:43.712843  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:43.781724  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:44.170656  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:44.214883  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:44.284613  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:44.672436  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:44.713464  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:44.784576  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:45.173858  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:45.217323  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:45.285232  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:45.671710  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:45.713145  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:45.784839  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:46.171867  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:46.217511  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:46.283707  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:46.672796  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:46.714260  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:46.786739  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:47.173100  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:47.215181  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:47.283487  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:47.672302  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:47.713400  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:47.785002  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:48.173144  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:48.215423  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:48.283991  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:48.671957  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:48.713286  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:48.782554  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:49.171695  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:49.216874  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:49.290998  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:49.671152  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:49.713667  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:49.783853  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:50.172562  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:50.217069  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:50.286103  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:50.671851  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:50.718024  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:50.783054  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:51.171076  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:51.214125  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:51.282605  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:51.670953  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:51.712840  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:51.783279  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:52.169574  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:52.214104  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:52.283931  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:52.670798  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:52.715513  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:52.783680  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:53.171390  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:53.213786  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:53.284676  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:53.670609  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:53.715540  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:53.787485  130870 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1208 03:41:54.172231  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:54.216543  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:54.285699  130870 kapi.go:107] duration metric: took 1m12.506977536s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1208 03:41:54.671524  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:54.772084  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:55.171178  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:55.212978  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1208 03:41:55.670990  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:55.712994  130870 kapi.go:107] duration metric: took 1m12.504195406s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1208 03:41:56.170748  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:56.670153  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:57.171963  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:57.707832  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:58.173736  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:58.670810  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:59.171453  130870 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1208 03:41:59.670719  130870 kapi.go:107] duration metric: took 1m15.004127101s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1208 03:41:59.672410  130870 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-301052 cluster.
	I1208 03:41:59.673805  130870 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1208 03:41:59.674990  130870 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1208 03:41:59.676303  130870 out.go:179] * Enabled addons: ingress-dns, default-storageclass, cloud-spanner, storage-provisioner-rancher, storage-provisioner, registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1208 03:41:59.677362  130870 addons.go:530] duration metric: took 1m27.472103741s for enable addons: enabled=[ingress-dns default-storageclass cloud-spanner storage-provisioner-rancher storage-provisioner registry-creds nvidia-device-plugin amd-gpu-device-plugin inspektor-gadget metrics-server yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
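	As a sketch of the two options the gcp-auth messages above describe (the pod name skip-demo is hypothetical; the label key and the --refresh flag are taken from the messages themselves):

	    # Opt a new pod out of credential mounting via the documented label.
	    kubectl --context addons-301052 run skip-demo --image=nginx \
	      --labels="gcp-auth-skip-secret=true"

	    # Re-mount credentials into pods that already exist by refreshing the addon.
	    minikube -p addons-301052 addons enable gcp-auth --refresh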
	I1208 03:41:59.677413  130870 start.go:247] waiting for cluster config update ...
	I1208 03:41:59.677438  130870 start.go:256] writing updated cluster config ...
	I1208 03:41:59.677749  130870 ssh_runner.go:195] Run: rm -f paused
	I1208 03:41:59.684150  130870 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 03:41:59.771653  130870 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z7cr6" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 03:41:59.777518  130870 pod_ready.go:94] pod "coredns-66bc5c9577-z7cr6" is "Ready"
	I1208 03:41:59.777556  130870 pod_ready.go:86] duration metric: took 5.859172ms for pod "coredns-66bc5c9577-z7cr6" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 03:41:59.779641  130870 pod_ready.go:83] waiting for pod "etcd-addons-301052" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 03:41:59.785023  130870 pod_ready.go:94] pod "etcd-addons-301052" is "Ready"
	I1208 03:41:59.785052  130870 pod_ready.go:86] duration metric: took 5.385993ms for pod "etcd-addons-301052" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 03:41:59.787089  130870 pod_ready.go:83] waiting for pod "kube-apiserver-addons-301052" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 03:41:59.791726  130870 pod_ready.go:94] pod "kube-apiserver-addons-301052" is "Ready"
	I1208 03:41:59.791747  130870 pod_ready.go:86] duration metric: took 4.633015ms for pod "kube-apiserver-addons-301052" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 03:41:59.793689  130870 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-301052" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 03:42:00.089020  130870 pod_ready.go:94] pod "kube-controller-manager-addons-301052" is "Ready"
	I1208 03:42:00.089050  130870 pod_ready.go:86] duration metric: took 295.34037ms for pod "kube-controller-manager-addons-301052" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 03:42:00.289428  130870 pod_ready.go:83] waiting for pod "kube-proxy-7c4kr" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 03:42:00.688662  130870 pod_ready.go:94] pod "kube-proxy-7c4kr" is "Ready"
	I1208 03:42:00.688741  130870 pod_ready.go:86] duration metric: took 399.27566ms for pod "kube-proxy-7c4kr" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 03:42:00.888483  130870 pod_ready.go:83] waiting for pod "kube-scheduler-addons-301052" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 03:42:01.288759  130870 pod_ready.go:94] pod "kube-scheduler-addons-301052" is "Ready"
	I1208 03:42:01.288787  130870 pod_ready.go:86] duration metric: took 400.265679ms for pod "kube-scheduler-addons-301052" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 03:42:01.288801  130870 pod_ready.go:40] duration metric: took 1.604610295s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 03:42:01.336886  130870 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1208 03:42:01.338697  130870 out.go:179] * Done! kubectl is now configured to use "addons-301052" cluster and "default" namespace by default
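	A rough shell equivalent of the pod_ready waits above (illustrative only: the label selectors and 4m0s timeout are copied from the log, but kapi.go polls the API directly rather than shelling out):

	    # Wait for each core kube-system pod set to report Ready, mirroring the log.
	    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	               component=kube-controller-manager k8s-app=kube-proxy \
	               component=kube-scheduler; do
	      kubectl --context addons-301052 -n kube-system wait pod \
	        --for=condition=Ready -l "$sel" --timeout=4m0s
	    done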
	
	
	==> CRI-O <==
	Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.749570341Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93921074-51cc-4834-ba43-26abcaf5d7b1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.749628989Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93921074-51cc-4834-ba43-26abcaf5d7b1 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.749933474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de67f2afb05a4dedc21ce9cdbdaaf459bb269898be655b5b5676945ee9a2f3cc,PodSandboxId:b20236486a064f4d7ef2a28f870cd90f014896bde817043f42731cec1fd882f5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765165368466538876,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef7f12e8-972f-418c-8608-d62b63b98950,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b595b0b25f31e2bf7cc4ffa9062177599906802d9176b3ae5c158d48a60373fb,PodSandboxId:91974d99084c7a40d619c8508001da4363ea98044a0b33e9fa4979c556ba3b73,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765165326072584510,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57b67a40-1452-43b4-aa1c-f17676388dbf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ce1989799411b2f7ff90e1e217092df8f96b814a3be72c666f51529d1b848c5,PodSandboxId:cf4fc12546f0fb1e06e9a075a90169b296e324f9d7b721cb1ef4156a9586ad37,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1765165313338332935,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-bj9np,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5ab5234c-77d2-4257-8bee-62465621b4de,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:30039ae77472cd3b95e457cd22e1c7bee1a5821218287aa2c29d2ee366316180,PodSandboxId:5d0a8f791eadef27f90da640c1c4ebf48f42e22e91ba0b1bb6e393adc8bf5321,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765165294456030896,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qdld5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 45eda127-bba9-4b4b-8273-e1dce8914f1a,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a4370f0039a1ced9b33f20410d8d5b16a4761e10ee26cc4164b5776b4b05f0,PodSandboxId:6445a397066abd00b34157ecef558f8b247ee30501b3281c32f4fc641a447fc8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765165292553601046,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ckkz4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 37f4923c-6ada-495e-a07b-092d1cac4632,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:645a868efff329f0d7e4253c6dbdc8e26cf5f08f22ab6bca7b2da1fdf6cf380d,PodSandboxId:ccc04dae84e8c23741e82d441ab38d72993390ff06b52ca61049e1cb31607097,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765165277206651224,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4969ec9d-ea71-402c-b994-d7d4204d91e6,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467865c06eb44dc372c4b8a50808c67951897c5de5bea38de98416830d4ec56c,PodSandboxId:f2f0e02573dff400b702a48e25cad298f1876b70da2f222080e1e88f049a1db7,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765165256782420883,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-mn6gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c7c111-d878-4bea-8f1c-64b08778e73b,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709564618ae54f120db30996e41815d9eb651d09fafe7966c6d0727a3827f788,PodSandboxId:aab2aac1e6f835328cdaf7380abbd6dda8b63f2d623c5cd7fc5c60f5423eab69,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765165240871901120,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823c161b-93d0-4c6c-851d-3820d95a4ea5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45cf0e8ab6a77d2ea4dac6f4fa16358bec5dec74634e7ab68d5f46552d686d23,PodSandboxId:85ee96f590968746abaa3ba0a6191e42cc35d62501b75290c4e4af2a633d2eda,Metadata:&ContainerMetadata{Name:kube-proxy,Attemp
t:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765165234862765581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7c4kr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58c183-e8c6-40c7-8fb9-3cc7bbc35eef,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e37ee755338b96f8d3c09b255384598ff0852611f882c9e070403eaebbd672,PodSandboxId:8933d5386813b6d7fa3f44699f7ca8058dd42659b6252646bfe5a3d1a1fb408d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a3
67cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765165234737566827,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z7cr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f39adb02-b124-455f-b9aa-bb4f34c022f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fbb685fba1fcc30f7ad193348281fd444676bb14467bbaa07222dff97ff2905,PodSandboxId:80c4f0db2cc2717f43d6da71afb40f89645cfa67366cd3642a6bbe8910d183f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765165219762513901,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3596a1bf19d5e7e43177de11b99a68da,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc261a39952310881b167a21e143ae5b3e26f0c7805acbf8ab6c523a9702b42,PodSandboxId:af39ff2a0cefe208ebf470456ce01e1467af61396c8195cd2a06345262ae18c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765165219736024245,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42bd624619baae4c5f162c2e2b4c9559,},Annotations:map[string]string{io.kubernetes.container.hash: 53c
47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f320deceed20f49e1c4a0e65056562da25f2ed8f0f233fee06d3c8b77092ee9e,PodSandboxId:1afa54fabac5bbc94a9ecc9ef94a9792d3c58b7cc302d7681074bffbb31980f3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765165219727450267,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: f9c442eeafd13b0c55fc20762ee08821,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c033572e4caf3630aefd5cc91b1072ff48902921a38ae6bc4f74dc0e5a2deea,PodSandboxId:a900e2a6ac08ed5c078688583ba829877830a7e51c118eeb90e5cb3b11ed66aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765165219705978066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9487eb24755baafb7e85954efbf3df3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=93921074-51cc-4834-ba43-26abcaf5d7b1 name=/runtime.v1.RuntimeService/ListContainers
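	The ListContainersRequest/Response pairs above are CRI-O answering the kubelet over the CRI API; the same listing can be reproduced by hand from inside the guest (assuming crictl is available there, as it is in stock minikube images):

	    # List all containers, running and exited, through the same CRI endpoint.
	    minikube -p addons-301052 ssh -- sudo crictl ps -a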
	Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.783272293Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2911925e-5222-42cf-aa9a-d02ce274550f name=/runtime.v1.RuntimeService/Version
	Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.783560258Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2911925e-5222-42cf-aa9a-d02ce274550f name=/runtime.v1.RuntimeService/Version
	Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.784962092Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dd31bc55-357a-497d-bec7-2baf06989d90 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.786198564Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765165509786173976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:585495,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dd31bc55-357a-497d-bec7-2baf06989d90 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.787170040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=048b3767-9429-4b71-904a-e0cae24f7e64 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.787236520Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=048b3767-9429-4b71-904a-e0cae24f7e64 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.787553153Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:de67f2afb05a4dedc21ce9cdbdaaf459bb269898be655b5b5676945ee9a2f3cc,PodSandboxId:b20236486a064f4d7ef2a28f870cd90f014896bde817043f42731cec1fd882f5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1765165368466538876,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ef7f12e8-972f-418c-8608-d62b63b98950,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b595b0b25f31e2bf7cc4ffa9062177599906802d9176b3ae5c158d48a60373fb,PodSandboxId:91974d99084c7a40d619c8508001da4363ea98044a0b33e9fa4979c556ba3b73,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765165326072584510,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 57b67a40-1452-43b4-aa1c-f17676388dbf,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ce1989799411b2f7ff90e1e217092df8f96b814a3be72c666f51529d1b848c5,PodSandboxId:cf4fc12546f0fb1e06e9a075a90169b296e324f9d7b721cb1ef4156a9586ad37,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1765165313338332935,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-bj9np,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 5ab5234c-77d2-4257-8bee-62465621b4de,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:30039ae77472cd3b95e457cd22e1c7bee1a5821218287aa2c29d2ee366316180,PodSandboxId:5d0a8f791eadef27f90da640c1c4ebf48f42e22e91ba0b1bb6e393adc8bf5321,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01
c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765165294456030896,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-qdld5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 45eda127-bba9-4b4b-8273-e1dce8914f1a,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7a4370f0039a1ced9b33f20410d8d5b16a4761e10ee26cc4164b5776b4b05f0,PodSandboxId:6445a397066abd00b34157ecef558f8b247ee30501b3281c32f4fc641a447fc8,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1765165292553601046,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-ckkz4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 37f4923c-6ada-495e-a07b-092d1cac4632,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:645a868efff329f0d7e4253c6dbdc8e26cf5f08f22ab6bca7b2da1fdf6cf380d,PodSandboxId:ccc04dae84e8c23741e82d441ab38d72993390ff06b52ca61049e1cb31607097,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:m
ap[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765165277206651224,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4969ec9d-ea71-402c-b994-d7d4204d91e6,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:467865c06eb44dc372c4b8a50808c67951897c5de5bea38de98416830d4ec56c,PodSandboxId:f2f0e02573dff400b702a48e25cad298f1876b70da2f222080e1e88f049a1db7,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&Imag
eSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765165256782420883,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-mn6gz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34c7c111-d878-4bea-8f1c-64b08778e73b,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:709564618ae54f120db30996e41815d9eb651d09fafe7966c6d0727a3827f788,PodSandboxId:aab2aac1e6f835328cdaf7380abbd6dda8b63f2d623c5cd7fc5c60f5423eab69,Metadata:&ContainerMetadata{Name:stora
ge-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765165240871901120,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 823c161b-93d0-4c6c-851d-3820d95a4ea5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45cf0e8ab6a77d2ea4dac6f4fa16358bec5dec74634e7ab68d5f46552d686d23,PodSandboxId:85ee96f590968746abaa3ba0a6191e42cc35d62501b75290c4e4af2a633d2eda,Metadata:&ContainerMetadata{Name:kube-proxy,Attemp
t:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765165234862765581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7c4kr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d58c183-e8c6-40c7-8fb9-3cc7bbc35eef,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6e37ee755338b96f8d3c09b255384598ff0852611f882c9e070403eaebbd672,PodSandboxId:8933d5386813b6d7fa3f44699f7ca8058dd42659b6252646bfe5a3d1a1fb408d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a3
67cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765165234737566827,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-z7cr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f39adb02-b124-455f-b9aa-bb4f34c022f4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fbb685fba1fcc30f7ad193348281fd444676bb14467bbaa07222dff97ff2905,PodSandboxId:80c4f0db2cc2717f43d6da71afb40f89645cfa67366cd3642a6bbe8910d183f1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765165219762513901,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3596a1bf19d5e7e43177de11b99a68da,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.k
ubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc261a39952310881b167a21e143ae5b3e26f0c7805acbf8ab6c523a9702b42,PodSandboxId:af39ff2a0cefe208ebf470456ce01e1467af61396c8195cd2a06345262ae18c7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765165219736024245,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42bd624619baae4c5f162c2e2b4c9559,},Annotations:map[string]string{io.kubernetes.container.hash: 53c
47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f320deceed20f49e1c4a0e65056562da25f2ed8f0f233fee06d3c8b77092ee9e,PodSandboxId:1afa54fabac5bbc94a9ecc9ef94a9792d3c58b7cc302d7681074bffbb31980f3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765165219727450267,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: f9c442eeafd13b0c55fc20762ee08821,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c033572e4caf3630aefd5cc91b1072ff48902921a38ae6bc4f74dc0e5a2deea,PodSandboxId:a900e2a6ac08ed5c078688583ba829877830a7e51c118eeb90e5cb3b11ed66aa,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765165219705978066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: e
tcd-addons-301052,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9487eb24755baafb7e85954efbf3df3,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=048b3767-9429-4b71-904a-e0cae24f7e64 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 03:45:09 addons-301052 crio[808]: time="2025-12-08 03:45:09.874315725Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
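
The Version/ImageFsInfo/ListContainers chatter above is the kubelet's periodic CRI polling against cri-o, not an error condition. A minimal way to reproduce the same three queries by hand, assuming the addons-301052 VM is still running and using the crictl binary shipped in minikube's guest image:

    $ minikube ssh -p addons-301052
    $ sudo crictl version       # mirrors the RuntimeService/Version exchange
    $ sudo crictl ps -a         # mirrors the unfiltered RuntimeService/ListContainers call
    $ sudo crictl imagefsinfo   # mirrors the ImageService/ImageFsInfo exchange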
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	de67f2afb05a4       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                              2 minutes ago       Running             nginx                     0                   b20236486a064       nginx                                      default
	b595b0b25f31e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   91974d99084c7       busybox                                    default
	1ce1989799411       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27             3 minutes ago       Running             controller                0                   cf4fc12546f0f       ingress-nginx-controller-6c8bf45fb-bj9np   ingress-nginx
	30039ae77472c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   3 minutes ago       Exited              patch                     0                   5d0a8f791eade       ingress-nginx-admission-patch-qdld5        ingress-nginx
	c7a4370f0039a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   3 minutes ago       Exited              create                    0                   6445a397066ab       ingress-nginx-admission-create-ckkz4       ingress-nginx
	645a868efff32       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago       Running             minikube-ingress-dns      0                   ccc04dae84e8c       kube-ingress-dns-minikube                  kube-system
	467865c06eb44       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   f2f0e02573dff       amd-gpu-device-plugin-mn6gz                kube-system
	709564618ae54       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   aab2aac1e6f83       storage-provisioner                        kube-system
	45cf0e8ab6a77       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                             4 minutes ago       Running             kube-proxy                0                   85ee96f590968       kube-proxy-7c4kr                           kube-system
	a6e37ee755338       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   8933d5386813b       coredns-66bc5c9577-z7cr6                   kube-system
	2fbb685fba1fc       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                             4 minutes ago       Running             kube-scheduler            0                   80c4f0db2cc27       kube-scheduler-addons-301052               kube-system
	0dc261a399523       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                             4 minutes ago       Running             kube-controller-manager   0                   af39ff2a0cefe       kube-controller-manager-addons-301052      kube-system
	f320deceed20f       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                             4 minutes ago       Running             kube-apiserver            0                   1afa54fabac5b       kube-apiserver-addons-301052               kube-system
	2c033572e4caf       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago       Running             etcd                      0                   a900e2a6ac08e       etcd-addons-301052                         kube-system
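
The two Exited rows are the ingress-nginx admission webhook's one-shot create/patch jobs, which are expected to exit after completing; everything else is Running. The short IDs in the first column can be fed back to crictl for detail; a sketch, run inside the VM, assuming the ID prefixes are unique as usual:

    $ sudo crictl inspect de67f2afb05a4   # full status of the nginx container (a prefix is enough)
    $ sudo crictl logs 1ce1989799411      # ingress-nginx controller logs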
	
	
	==> coredns [a6e37ee755338b96f8d3c09b255384598ff0852611f882c9e070403eaebbd672] <==
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	[INFO] Reloading complete
	[INFO] 127.0.0.1:34647 - 51955 "HINFO IN 5210262041541038279.2713969031462723015. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.037564941s
	[INFO] 10.244.0.23:47444 - 6140 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000460634s
	[INFO] 10.244.0.23:52156 - 32041 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000120084s
	[INFO] 10.244.0.23:43492 - 18720 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124677s
	[INFO] 10.244.0.23:40893 - 49998 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000145391s
	[INFO] 10.244.0.23:48441 - 748 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000090052s
	[INFO] 10.244.0.23:37366 - 782 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000133158s
	[INFO] 10.244.0.23:36217 - 48086 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003291914s
	[INFO] 10.244.0.23:42213 - 27919 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.003908309s
	[INFO] 10.244.0.28:51164 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000343688s
	[INFO] 10.244.0.28:52181 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000144018s
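
The NXDOMAIN bursts for storage.googleapis.com.*.cluster.local are expected: with the stock pod resolv.conf (search <namespace>.svc.cluster.local svc.cluster.local cluster.local, ndots:5), an external name is tried against every search suffix before the bare name, producing one A/AAAA NXDOMAIN pair per suffix and then the NOERROR answers seen above. The same walk can be reproduced from the busybox test pod, assuming it is still present:

    $ kubectl --context addons-301052 exec busybox -- nslookup storage.googleapis.com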
	
	
	==> describe nodes <==
	Name:               addons-301052
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-301052
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=730a0938e5fe3e95dced085e5e597b4345feecad
	                    minikube.k8s.io/name=addons-301052
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_08T03_40_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-301052
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Dec 2025 03:40:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-301052
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Dec 2025 03:45:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Dec 2025 03:43:31 +0000   Mon, 08 Dec 2025 03:40:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Dec 2025 03:43:31 +0000   Mon, 08 Dec 2025 03:40:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Dec 2025 03:43:31 +0000   Mon, 08 Dec 2025 03:40:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Dec 2025 03:43:31 +0000   Mon, 08 Dec 2025 03:40:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.103
	  Hostname:    addons-301052
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 e8d346d227a3494ebffe43f0ee3efd1d
	  System UUID:                e8d346d2-27a3-494e-bffe-43f0ee3efd1d
	  Boot ID:                    6a6149ae-760d-4566-bc6c-1aa8f15648d4
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  default                     hello-world-app-5d498dc89-sdslz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-bj9np    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m30s
	  kube-system                 amd-gpu-device-plugin-mn6gz                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 coredns-66bc5c9577-z7cr6                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m38s
	  kube-system                 etcd-addons-301052                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m43s
	  kube-system                 kube-apiserver-addons-301052                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-controller-manager-addons-301052       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-proxy-7c4kr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-scheduler-addons-301052                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m30s  kube-proxy       
	  Normal  Starting                 4m43s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m43s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m43s  kubelet          Node addons-301052 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m43s  kubelet          Node addons-301052 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m43s  kubelet          Node addons-301052 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m42s  kubelet          Node addons-301052 status is now: NodeReady
	  Normal  RegisteredNode           4m39s  node-controller  Node addons-301052 event: Registered Node addons-301052 in Controller
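
This block is plain kubectl describe node output: the node is Ready, untainted, and the 850m of CPU requests on a 2-CPU VM leaves headroom for the test pods (including the 2s-old hello-world-app deployment). To regenerate it, or to pull the raw allocatable figures directly:

    $ kubectl --context addons-301052 describe node addons-301052
    $ kubectl --context addons-301052 get node addons-301052 -o jsonpath='{.status.allocatable}'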
	
	
	==> dmesg <==
	[  +0.484163] kauditd_printk_skb: 285 callbacks suppressed
	[  +1.342782] kauditd_printk_skb: 395 callbacks suppressed
	[  +7.622901] kauditd_printk_skb: 305 callbacks suppressed
	[Dec 8 03:41] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.647693] kauditd_printk_skb: 26 callbacks suppressed
	[  +9.728423] kauditd_printk_skb: 23 callbacks suppressed
	[  +9.035294] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.019597] kauditd_printk_skb: 80 callbacks suppressed
	[  +1.006170] kauditd_printk_skb: 115 callbacks suppressed
	[  +4.719225] kauditd_printk_skb: 88 callbacks suppressed
	[  +0.000113] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.070195] kauditd_printk_skb: 62 callbacks suppressed
	[Dec 8 03:42] kauditd_printk_skb: 17 callbacks suppressed
	[ +13.097318] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.000780] kauditd_printk_skb: 22 callbacks suppressed
	[  +1.302931] kauditd_printk_skb: 107 callbacks suppressed
	[  +4.078293] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.455995] kauditd_printk_skb: 120 callbacks suppressed
	[  +3.739923] kauditd_printk_skb: 156 callbacks suppressed
	[  +2.507060] kauditd_printk_skb: 85 callbacks suppressed
	[  +4.423570] kauditd_printk_skb: 32 callbacks suppressed
	[Dec 8 03:43] kauditd_printk_skb: 30 callbacks suppressed
	[  +0.000282] kauditd_printk_skb: 62 callbacks suppressed
	[  +7.845661] kauditd_printk_skb: 41 callbacks suppressed
	[Dec 8 03:45] kauditd_printk_skb: 127 callbacks suppressed
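
The dmesg excerpt contains only kauditd rate-limiting notices ("callbacks suppressed", i.e. audit messages dropped from the printk buffer), not kernel errors; the bracketed values are relative offsets in dmesg's --reltime format. For absolute per-line timestamps, the buffer can be re-read inside the VM, assuming it is still up:

    $ minikube ssh -p addons-301052 "sudo dmesg --ctime"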
	
	
	==> etcd [2c033572e4caf3630aefd5cc91b1072ff48902921a38ae6bc4f74dc0e5a2deea] <==
	{"level":"warn","ts":"2025-12-08T03:41:28.960130Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"214.427586ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-08T03:41:28.960216Z","caller":"traceutil/trace.go:172","msg":"trace[1044816213] range","detail":"{range_begin:/registry/controllerrevisions; range_end:; response_count:0; response_revision:995; }","duration":"215.383474ms","start":"2025-12-08T03:41:28.744782Z","end":"2025-12-08T03:41:28.960165Z","steps":["trace[1044816213] 'agreement among raft nodes before linearized reading'  (duration: 214.405714ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-08T03:41:28.957153Z","caller":"traceutil/trace.go:172","msg":"trace[1754549274] transaction","detail":"{read_only:false; response_revision:995; number_of_response:1; }","duration":"149.428813ms","start":"2025-12-08T03:41:28.807712Z","end":"2025-12-08T03:41:28.957141Z","steps":["trace[1754549274] 'process raft request'  (duration: 149.307863ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-08T03:41:37.213105Z","caller":"traceutil/trace.go:172","msg":"trace[1935213483] linearizableReadLoop","detail":"{readStateIndex:1072; appliedIndex:1072; }","duration":"119.245874ms","start":"2025-12-08T03:41:37.093797Z","end":"2025-12-08T03:41:37.213042Z","steps":["trace[1935213483] 'read index received'  (duration: 119.241123ms)","trace[1935213483] 'applied index is now lower than readState.Index'  (duration: 4.101µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-08T03:41:37.213274Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.460924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-qdld5\" limit:1 ","response":"range_response_count:1 size:4635"}
	{"level":"info","ts":"2025-12-08T03:41:37.213295Z","caller":"traceutil/trace.go:172","msg":"trace[40001368] range","detail":"{range_begin:/registry/pods/ingress-nginx/ingress-nginx-admission-patch-qdld5; range_end:; response_count:1; response_revision:1038; }","duration":"119.495779ms","start":"2025-12-08T03:41:37.093793Z","end":"2025-12-08T03:41:37.213288Z","steps":["trace[40001368] 'agreement among raft nodes before linearized reading'  (duration: 119.429041ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-08T03:41:37.213320Z","caller":"traceutil/trace.go:172","msg":"trace[660356740] transaction","detail":"{read_only:false; response_revision:1039; number_of_response:1; }","duration":"161.806946ms","start":"2025-12-08T03:41:37.051500Z","end":"2025-12-08T03:41:37.213307Z","steps":["trace[660356740] 'process raft request'  (duration: 161.621073ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-08T03:41:43.359857Z","caller":"traceutil/trace.go:172","msg":"trace[981194662] linearizableReadLoop","detail":"{readStateIndex:1121; appliedIndex:1121; }","duration":"121.880397ms","start":"2025-12-08T03:41:43.237959Z","end":"2025-12-08T03:41:43.359840Z","steps":["trace[981194662] 'read index received'  (duration: 121.874993ms)","trace[981194662] 'applied index is now lower than readState.Index'  (duration: 4.496µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-08T03:41:43.360232Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.256205ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deviceclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-08T03:41:43.360252Z","caller":"traceutil/trace.go:172","msg":"trace[668778472] range","detail":"{range_begin:/registry/deviceclasses; range_end:; response_count:0; response_revision:1087; }","duration":"122.292274ms","start":"2025-12-08T03:41:43.237955Z","end":"2025-12-08T03:41:43.360247Z","steps":["trace[668778472] 'agreement among raft nodes before linearized reading'  (duration: 122.235464ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-08T03:41:43.360032Z","caller":"traceutil/trace.go:172","msg":"trace[1186481515] transaction","detail":"{read_only:false; response_revision:1087; number_of_response:1; }","duration":"144.396794ms","start":"2025-12-08T03:41:43.215627Z","end":"2025-12-08T03:41:43.360024Z","steps":["trace[1186481515] 'process raft request'  (duration: 144.291876ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-08T03:41:43.361976Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.221559ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-12-08T03:41:43.362089Z","caller":"traceutil/trace.go:172","msg":"trace[2133983913] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1087; }","duration":"102.343255ms","start":"2025-12-08T03:41:43.259738Z","end":"2025-12-08T03:41:43.362081Z","steps":["trace[2133983913] 'agreement among raft nodes before linearized reading'  (duration: 100.712678ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-08T03:41:51.612520Z","caller":"traceutil/trace.go:172","msg":"trace[1302055486] transaction","detail":"{read_only:false; response_revision:1128; number_of_response:1; }","duration":"161.966721ms","start":"2025-12-08T03:41:51.450535Z","end":"2025-12-08T03:41:51.612501Z","steps":["trace[1302055486] 'process raft request'  (duration: 161.857757ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-08T03:42:31.093365Z","caller":"traceutil/trace.go:172","msg":"trace[1724223708] transaction","detail":"{read_only:false; response_revision:1367; number_of_response:1; }","duration":"122.922679ms","start":"2025-12-08T03:42:30.970422Z","end":"2025-12-08T03:42:31.093345Z","steps":["trace[1724223708] 'process raft request'  (duration: 122.505545ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-08T03:42:36.269763Z","caller":"traceutil/trace.go:172","msg":"trace[1827662002] linearizableReadLoop","detail":"{readStateIndex:1465; appliedIndex:1465; }","duration":"136.967741ms","start":"2025-12-08T03:42:36.132776Z","end":"2025-12-08T03:42:36.269743Z","steps":["trace[1827662002] 'read index received'  (duration: 136.960938ms)","trace[1827662002] 'applied index is now lower than readState.Index'  (duration: 5.804µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-08T03:42:36.270129Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.350849ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/test-pvc\" limit:1 ","response":"range_response_count:1 size:1837"}
	{"level":"info","ts":"2025-12-08T03:42:36.270168Z","caller":"traceutil/trace.go:172","msg":"trace[2087791428] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/test-pvc; range_end:; response_count:1; response_revision:1419; }","duration":"137.406646ms","start":"2025-12-08T03:42:36.132754Z","end":"2025-12-08T03:42:36.270161Z","steps":["trace[2087791428] 'agreement among raft nodes before linearized reading'  (duration: 137.198202ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-08T03:42:36.391570Z","caller":"traceutil/trace.go:172","msg":"trace[967568169] linearizableReadLoop","detail":"{readStateIndex:1466; appliedIndex:1466; }","duration":"121.546101ms","start":"2025-12-08T03:42:36.269934Z","end":"2025-12-08T03:42:36.391480Z","steps":["trace[967568169] 'read index received'  (duration: 121.537815ms)","trace[967568169] 'applied index is now lower than readState.Index'  (duration: 7.299µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-08T03:42:36.394264Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"200.519786ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-08T03:42:36.394310Z","caller":"traceutil/trace.go:172","msg":"trace[1145301337] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1419; }","duration":"200.575376ms","start":"2025-12-08T03:42:36.193723Z","end":"2025-12-08T03:42:36.394299Z","steps":["trace[1145301337] 'agreement among raft nodes before linearized reading'  (duration: 197.995443ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-08T03:42:36.394624Z","caller":"traceutil/trace.go:172","msg":"trace[1506426832] transaction","detail":"{read_only:false; response_revision:1420; number_of_response:1; }","duration":"235.532098ms","start":"2025-12-08T03:42:36.159079Z","end":"2025-12-08T03:42:36.394611Z","steps":["trace[1506426832] 'process raft request'  (duration: 232.851485ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-08T03:42:36.395132Z","caller":"traceutil/trace.go:172","msg":"trace[1359952185] transaction","detail":"{read_only:false; response_revision:1421; number_of_response:1; }","duration":"123.703148ms","start":"2025-12-08T03:42:36.271421Z","end":"2025-12-08T03:42:36.395125Z","steps":["trace[1359952185] 'process raft request'  (duration: 123.534023ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-08T03:42:59.768980Z","caller":"traceutil/trace.go:172","msg":"trace[1462135902] transaction","detail":"{read_only:false; response_revision:1602; number_of_response:1; }","duration":"262.956918ms","start":"2025-12-08T03:42:59.506009Z","end":"2025-12-08T03:42:59.768966Z","steps":["trace[1462135902] 'process raft request'  (duration: 262.846353ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-08T03:43:00.966183Z","caller":"traceutil/trace.go:172","msg":"trace[145528908] transaction","detail":"{read_only:false; response_revision:1605; number_of_response:1; }","duration":"229.308063ms","start":"2025-12-08T03:43:00.736831Z","end":"2025-12-08T03:43:00.966140Z","steps":["trace[145528908] 'process raft request'  (duration: 229.124099ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:45:10 up 5 min,  0 users,  load average: 0.43, 1.08, 0.57
	Linux addons-301052 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [f320deceed20f49e1c4a0e65056562da25f2ed8f0f233fee06d3c8b77092ee9e] <==
	 > logger="UnhandledError"
	E1208 03:41:09.377557       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.254.20:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.254.20:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.254.20:443: connect: connection refused" logger="UnhandledError"
	E1208 03:41:09.380696       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.254.20:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.254.20:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.254.20:443: connect: connection refused" logger="UnhandledError"
	E1208 03:41:09.384993       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.254.20:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.254.20:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.254.20:443: connect: connection refused" logger="UnhandledError"
	I1208 03:41:09.446180       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1208 03:42:12.124444       1 conn.go:339] Error on socket receive: read tcp 192.168.39.103:8443->192.168.39.1:51470: use of closed network connection
	E1208 03:42:12.325659       1 conn.go:339] Error on socket receive: read tcp 192.168.39.103:8443->192.168.39.1:51494: use of closed network connection
	I1208 03:42:21.706011       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.158.175"}
	I1208 03:42:43.836123       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1208 03:42:44.020999       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.108.137"}
	E1208 03:42:52.636881       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1208 03:43:07.071869       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1208 03:43:10.397173       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1208 03:43:23.496974       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1208 03:43:23.497153       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1208 03:43:23.531784       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1208 03:43:23.531869       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1208 03:43:23.556603       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1208 03:43:23.556688       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1208 03:43:23.571614       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1208 03:43:23.571686       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1208 03:43:24.537250       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1208 03:43:24.572339       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1208 03:43:24.707382       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1208 03:45:08.851154       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.2.84"}
	
	
	==> kube-controller-manager [0dc261a39952310881b167a21e143ae5b3e26f0c7805acbf8ab6c523a9702b42] <==
	I1208 03:43:31.541957       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1208 03:43:31.963718       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 03:43:31.964728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1208 03:43:32.622744       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 03:43:32.623852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1208 03:43:32.695003       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 03:43:32.696165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1208 03:43:38.650743       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 03:43:38.651947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1208 03:43:41.814835       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 03:43:41.816016       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1208 03:43:44.111713       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 03:43:44.113678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1208 03:44:01.340791       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 03:44:01.342117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1208 03:44:03.944639       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 03:44:03.945850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1208 03:44:07.200476       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 03:44:07.201534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1208 03:44:48.138841       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 03:44:48.139791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1208 03:44:53.200843       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 03:44:53.202377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1208 03:44:56.089729       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1208 03:44:56.090763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [45cf0e8ab6a77d2ea4dac6f4fa16358bec5dec74634e7ab68d5f46552d686d23] <==
	I1208 03:40:36.779581       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1208 03:40:36.944295       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1208 03:40:36.990006       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.103"]
	E1208 03:40:37.009525       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1208 03:40:39.423183       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1208 03:40:39.423265       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1208 03:40:39.423291       1 server_linux.go:132] "Using iptables Proxier"
	I1208 03:40:39.857599       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1208 03:40:39.874623       1 server.go:527] "Version info" version="v1.34.2"
	I1208 03:40:39.874646       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 03:40:39.905164       1 config.go:106] "Starting endpoint slice config controller"
	I1208 03:40:39.905382       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1208 03:40:39.907757       1 config.go:403] "Starting serviceCIDR config controller"
	I1208 03:40:39.907794       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1208 03:40:39.917954       1 config.go:200] "Starting service config controller"
	I1208 03:40:39.917983       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1208 03:40:39.942230       1 config.go:309] "Starting node config controller"
	I1208 03:40:39.942280       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1208 03:40:39.942295       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1208 03:40:40.010773       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1208 03:40:40.025226       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1208 03:40:40.119149       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2fbb685fba1fcc30f7ad193348281fd444676bb14467bbaa07222dff97ff2905] <==
	E1208 03:40:23.435354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1208 03:40:23.439526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1208 03:40:23.440154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1208 03:40:23.442275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1208 03:40:23.442382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1208 03:40:23.442465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1208 03:40:23.442529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1208 03:40:23.443529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1208 03:40:23.443820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1208 03:40:23.444088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1208 03:40:23.444958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1208 03:40:24.269930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1208 03:40:24.312228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1208 03:40:24.317228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1208 03:40:24.359477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1208 03:40:24.372956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1208 03:40:24.577154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1208 03:40:24.645724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1208 03:40:24.681942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1208 03:40:24.695657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1208 03:40:24.718369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1208 03:40:24.722477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1208 03:40:24.766774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1208 03:40:24.915435       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1208 03:40:26.820411       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 08 03:43:31 addons-301052 kubelet[1497]: I1208 03:43:31.133010    1497 scope.go:117] "RemoveContainer" containerID="5b57b21529abb5d954dbc09480117b7a6cd26b04714b66e14dfed0e747ec53e9"
	Dec 08 03:43:31 addons-301052 kubelet[1497]: I1208 03:43:31.251269    1497 scope.go:117] "RemoveContainer" containerID="1d13255e7b443e24580f5511e84bc90b15d8b3280e461f696cac7a72e8b470ba"
	Dec 08 03:43:37 addons-301052 kubelet[1497]: E1208 03:43:37.646698    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765165417646354989 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 08 03:43:37 addons-301052 kubelet[1497]: E1208 03:43:37.646722    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765165417646354989 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 08 03:43:47 addons-301052 kubelet[1497]: E1208 03:43:47.648783    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765165427648494508 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 08 03:43:47 addons-301052 kubelet[1497]: E1208 03:43:47.649154    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765165427648494508 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 08 03:43:57 addons-301052 kubelet[1497]: E1208 03:43:57.652188    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765165437651757259 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 08 03:43:57 addons-301052 kubelet[1497]: E1208 03:43:57.652280    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765165437651757259 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 08 03:44:07 addons-301052 kubelet[1497]: E1208 03:44:07.655363    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765165447654931941 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 08 03:44:07 addons-301052 kubelet[1497]: E1208 03:44:07.655401    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765165447654931941 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 08 03:44:17 addons-301052 kubelet[1497]: E1208 03:44:17.658556    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765165457658227065 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 08 03:44:17 addons-301052 kubelet[1497]: E1208 03:44:17.658595    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765165457658227065 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 08 03:44:27 addons-301052 kubelet[1497]: E1208 03:44:27.661247    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765165467660875864 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 08 03:44:27 addons-301052 kubelet[1497]: E1208 03:44:27.661296    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765165467660875864 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 08 03:44:30 addons-301052 kubelet[1497]: I1208 03:44:30.302309    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-mn6gz" secret="" err="secret \"gcp-auth\" not found"
	Dec 08 03:44:37 addons-301052 kubelet[1497]: E1208 03:44:37.664620    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765165477664181411 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 08 03:44:37 addons-301052 kubelet[1497]: E1208 03:44:37.664666    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765165477664181411 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 08 03:44:44 addons-301052 kubelet[1497]: I1208 03:44:44.302738    1497 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 08 03:44:47 addons-301052 kubelet[1497]: E1208 03:44:47.666741    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765165487666394651 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 08 03:44:47 addons-301052 kubelet[1497]: E1208 03:44:47.666764    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765165487666394651 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 08 03:44:57 addons-301052 kubelet[1497]: E1208 03:44:57.670689    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765165497670027182 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 08 03:44:57 addons-301052 kubelet[1497]: E1208 03:44:57.670796    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765165497670027182 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 08 03:45:07 addons-301052 kubelet[1497]: E1208 03:45:07.673590    1497 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765165507673232271 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 08 03:45:07 addons-301052 kubelet[1497]: E1208 03:45:07.673632    1497 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765165507673232271 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:585495} inodes_used:{value:192}}"
	Dec 08 03:45:08 addons-301052 kubelet[1497]: I1208 03:45:08.916722    1497 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qvls\" (UniqueName: \"kubernetes.io/projected/ad54be60-4b07-4b6b-8c16-0adec3518a16-kube-api-access-8qvls\") pod \"hello-world-app-5d498dc89-sdslz\" (UID: \"ad54be60-4b07-4b6b-8c16-0adec3518a16\") " pod="default/hello-world-app-5d498dc89-sdslz"
	
	
	==> storage-provisioner [709564618ae54f120db30996e41815d9eb651d09fafe7966c6d0727a3827f788] <==
	W1208 03:44:44.657497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:44:46.661034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:44:46.666525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:44:48.669202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:44:48.676834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:44:50.680476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:44:50.685479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:44:52.688542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:44:52.696904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:44:54.701326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:44:54.706129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:44:56.710623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:44:56.718833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:44:58.722135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:44:58.726976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:45:00.732184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:45:00.740506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:45:02.743290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:45:02.748349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:45:04.751897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:45:04.758413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:45:06.761821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:45:06.766514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:45:08.788778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:45:08.805019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
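A note on the dump above: the repeated etcd "apply request took too long" warnings (applies of roughly 100-260ms against the 100ms expectation) point at slow backend commits, which is common on nested-VM CI workers and is most likely independent of the ingress failure itself. A quick way to confirm from inside the cluster is sketched below; it assumes the usual kubeadm-style static pod name etcd-addons-301052 and minikube's default certificate paths (kubeadm proper keeps them under /etc/kubernetes/pki/etcd instead):

	kubectl --context addons-301052 -n kube-system exec etcd-addons-301052 -- etcdctl \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint status -w table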
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-301052 -n addons-301052
helpers_test.go:269: (dbg) Run:  kubectl --context addons-301052 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-sdslz ingress-nginx-admission-create-ckkz4 ingress-nginx-admission-patch-qdld5
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-301052 describe pod hello-world-app-5d498dc89-sdslz ingress-nginx-admission-create-ckkz4 ingress-nginx-admission-patch-qdld5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-301052 describe pod hello-world-app-5d498dc89-sdslz ingress-nginx-admission-create-ckkz4 ingress-nginx-admission-patch-qdld5: exit status 1 (73.131644ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-sdslz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-301052/192.168.39.103
	Start Time:       Mon, 08 Dec 2025 03:45:08 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8qvls (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8qvls:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-sdslz to addons-301052
	  Normal  Pulling    1s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ckkz4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-qdld5" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-301052 describe pod hello-world-app-5d498dc89-sdslz ingress-nginx-admission-create-ckkz4 ingress-nginx-admission-patch-qdld5: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301052 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-301052 addons disable ingress-dns --alsologtostderr -v=1: (1.596543507s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301052 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-301052 addons disable ingress --alsologtostderr -v=1: (7.659073809s)
--- FAIL: TestAddons/parallel/Ingress (156.51s)
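When TestAddons/parallel/Ingress fails like this, a quick manual re-check of the addon separates a controller that never became ready from a broken routing path on the node. The sketch below is an assumption-laden starting point: it takes for granted that the controller Deployment carries minikube's default name ingress-nginx-controller.

	kubectl --context addons-301052 -n ingress-nginx get pods,svc -o wide
	kubectl --context addons-301052 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50
	out/minikube-linux-amd64 -p addons-301052 ssh "curl -sv --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"

If the controller logs show the Ingress synced but curl on the node still times out, the controller's hostPort binding and the node's iptables rules are the next things to inspect.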

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (369.63s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [56d31cd6-195a-49c2-9465-7ec1179a0bb2] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005702926s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-940895 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-940895 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-940895 get pvc myclaim -o=json
I1208 03:53:24.977508  129900 retry.go:31] will retry after 1.796133142s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:4f960660-d2b5-4e3c-bbb6-98839f64bcdf ResourceVersion:889 Generation:0 CreationTimestamp:2025-12-08 03:53:24 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-4f960660-d2b5-4e3c-bbb6-98839f64bcdf StorageClassName:0xc0019ef4e0 VolumeMode:0xc0019ef4f0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-940895 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-940895 apply -f testdata/storage-provisioner/pod.yaml
I1208 03:53:26.995767  129900 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [bdbb7528-256a-4d9c-9641-dcdb820d6496] Pending
helpers_test.go:352: "sp-pod" [bdbb7528-256a-4d9c-9641-dcdb820d6496] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2025/12/08 03:53:30 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-940895 -n functional-940895
functional_test_pvc_test.go:140: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-12-08 03:59:27.225786973 +0000 UTC m=+1229.530109426
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-940895 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-940895 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-940895/192.168.39.191
Start Time:       Mon, 08 Dec 2025 03:53:26 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.13
IPs:
  IP:  10.244.0.13
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-df9p4 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-df9p4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  6m1s                  default-scheduler  Successfully assigned default/sp-pod to functional-940895
  Warning  Failed     5m14s                 kubelet            Failed to pull image "docker.io/nginx": copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Normal   BackOff    56s (x10 over 5m14s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     56s (x10 over 5m14s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    45s (x5 over 5m58s)   kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     13s (x5 over 5m14s)   kubelet            Error: ErrImagePull
  Warning  Failed     13s (x4 over 4m26s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-940895 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-940895 logs sp-pod -n default: exit status 1 (73.517615ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-940895 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
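The root cause is explicit in the pod events above: every pull of docker.io/nginx hits Docker Hub's unauthenticated rate limit (toomanyrequests), so the container never starts and the 6m0s wait expires. One mitigation, sketched here under the assumption that the CI host itself can still pull (or already caches) the image, is to side-load it into the node so kubelet never pulls from the registry:

	docker pull docker.io/library/nginx:latest    # on the host, ideally with authenticated credentials
	out/minikube-linux-amd64 -p functional-940895 image load nginx:latest

A registry mirror (--registry-mirror on minikube start) or an imagePullSecret on the pod spec are the other usual escapes from the rate limit.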
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-940895 -n functional-940895
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-940895 logs -n 25: (1.316448067s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                 ARGS                                                                 │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-940895 ssh sudo cat /usr/share/ca-certificates/1299002.pem                                                                │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │ 08 Dec 25 03:53 UTC │
	│ ssh            │ functional-940895 ssh -- ls -la /mount-9p                                                                                            │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │ 08 Dec 25 03:53 UTC │
	│ ssh            │ functional-940895 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                             │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │ 08 Dec 25 03:53 UTC │
	│ service        │ functional-940895 service hello-node --url                                                                                           │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │ 08 Dec 25 03:53 UTC │
	│ ssh            │ functional-940895 ssh sudo umount -f /mount-9p                                                                                       │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │                     │
	│ ssh            │ functional-940895 ssh echo hello                                                                                                     │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │ 08 Dec 25 03:53 UTC │
	│ mount          │ -p functional-940895 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3995165135/001:/mount3 --alsologtostderr -v=1 │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │                     │
	│ ssh            │ functional-940895 ssh findmnt -T /mount1                                                                                             │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │                     │
	│ mount          │ -p functional-940895 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3995165135/001:/mount1 --alsologtostderr -v=1 │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │                     │
	│ mount          │ -p functional-940895 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3995165135/001:/mount2 --alsologtostderr -v=1 │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │                     │
	│ ssh            │ functional-940895 ssh cat /etc/hostname                                                                                              │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │ 08 Dec 25 03:53 UTC │
	│ ssh            │ functional-940895 ssh findmnt -T /mount1                                                                                             │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │ 08 Dec 25 03:53 UTC │
	│ ssh            │ functional-940895 ssh findmnt -T /mount2                                                                                             │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │ 08 Dec 25 03:53 UTC │
	│ ssh            │ functional-940895 ssh findmnt -T /mount3                                                                                             │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │ 08 Dec 25 03:53 UTC │
	│ mount          │ -p functional-940895 --kill=true                                                                                                     │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │                     │
	│ update-context │ functional-940895 update-context --alsologtostderr -v=2                                                                              │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │ 08 Dec 25 03:53 UTC │
	│ update-context │ functional-940895 update-context --alsologtostderr -v=2                                                                              │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │ 08 Dec 25 03:53 UTC │
	│ update-context │ functional-940895 update-context --alsologtostderr -v=2                                                                              │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │ 08 Dec 25 03:53 UTC │
	│ image          │ functional-940895 image ls --format short --alsologtostderr                                                                          │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │ 08 Dec 25 03:53 UTC │
	│ image          │ functional-940895 image ls --format yaml --alsologtostderr                                                                           │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │ 08 Dec 25 03:53 UTC │
	│ ssh            │ functional-940895 ssh pgrep buildkitd                                                                                                │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │                     │
	│ image          │ functional-940895 image build -t localhost/my-image:functional-940895 testdata/build --alsologtostderr                               │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │ 08 Dec 25 03:53 UTC │
	│ image          │ functional-940895 image ls                                                                                                           │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │ 08 Dec 25 03:53 UTC │
	│ image          │ functional-940895 image ls --format json --alsologtostderr                                                                           │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │ 08 Dec 25 03:53 UTC │
	│ image          │ functional-940895 image ls --format table --alsologtostderr                                                                          │ functional-940895 │ jenkins │ v1.37.0 │ 08 Dec 25 03:53 UTC │ 08 Dec 25 03:53 UTC │
	└────────────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
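
	[editor's note] The Audit table is part of the minikube logs output collected at helpers_test.go:255 above; it can be regenerated against the same profile with:

	    out/minikube-linux-amd64 -p functional-940895 logs -n 25
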
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 03:53:16
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 03:53:16.847820  138536 out.go:360] Setting OutFile to fd 1 ...
	I1208 03:53:16.847977  138536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 03:53:16.847986  138536 out.go:374] Setting ErrFile to fd 2...
	I1208 03:53:16.847990  138536 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 03:53:16.848187  138536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
	I1208 03:53:16.848609  138536 out.go:368] Setting JSON to false
	I1208 03:53:16.849430  138536 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2141,"bootTime":1765163856,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 03:53:16.849486  138536 start.go:143] virtualization: kvm guest
	I1208 03:53:16.851162  138536 out.go:179] * [functional-940895] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1208 03:53:16.852320  138536 out.go:179]   - MINIKUBE_LOCATION=21409
	I1208 03:53:16.852320  138536 notify.go:221] Checking for updates...
	I1208 03:53:16.854524  138536 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 03:53:16.855616  138536 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig
	I1208 03:53:16.856604  138536 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube
	I1208 03:53:16.857557  138536 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 03:53:16.858611  138536 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 03:53:16.860149  138536 config.go:182] Loaded profile config "functional-940895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 03:53:16.860637  138536 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 03:53:16.890908  138536 out.go:179] * Using the kvm2 driver based on existing profile
	I1208 03:53:16.891927  138536 start.go:309] selected driver: kvm2
	I1208 03:53:16.891939  138536 start.go:927] validating driver "kvm2" against &{Name:functional-940895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-940895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 03:53:16.892043  138536 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 03:53:16.892952  138536 cni.go:84] Creating CNI manager for ""
	I1208 03:53:16.893015  138536 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1208 03:53:16.893078  138536 start.go:353] cluster config:
	{Name:functional-940895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-940895 Namespace:default AP
IServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 03:53:16.894249  138536 out.go:179] * dry-run validation complete!
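
	[editor's note] The validated cluster config dumped above pins every knob the profile runs with. A hypothetical start invocation reconstructed from those fields (driver, runtime, sizing, apiserver port, and the single apiserver ExtraOption) would be:

	    out/minikube-linux-amd64 start -p functional-940895 \
	      --driver=kvm2 --container-runtime=crio \
	      --memory=4096 --cpus=2 --apiserver-port=8441 \
	      --kubernetes-version=v1.35.0-beta.0 \
	      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision
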
	
	
	==> CRI-O <==
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.000686009Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765166368000661648,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:240578,},InodesUsed:&UInt64Value{Value:111,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d32a30ca-e1c4-4fbc-aa92-5113526f6f87 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.001605045Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c183e2f7-1c22-48ad-819a-1275ba46766e name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.001662239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c183e2f7-1c22-48ad-819a-1275ba46766e name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.002043627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:130d6e5950736ea9a73e7eff4a1c8f6212472661f45a1dfd85e1cfcd7919dbfd,PodSandboxId:0dadcf84c0e28286200e7bb650202d78beed93cff04922d08b99676c54b4cade,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1765166020393308726,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-844cf969f6-6m2t6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15d8820a-958c-4b84-b3f5-82fcd7c32a4b,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"co
ntainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c0c9332d2aa76dc350296b2615bfe92564f7888d3fe19f6525083a4848df7d4,PodSandboxId:ea0e8daa95db77439d44e6f0578a9dc1b5b456f70fbe4b4e2c956d1aa360e0ea,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1765166009249825560,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-b84665fb8-6b4qd,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 9912daad-6
acf-4a28-9670-75ac7eff5c93,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f467b0e8232ee55f8cc0ef573c2984d3910af854881fc2f6a633b354fa89be4,PodSandboxId:400a3c685749e2777c7e22e747b463f9a81d30ee340a95dc33f220807eb1d4d8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1765166002415817507,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-s
craper,io.kubernetes.pod.name: dashboard-metrics-scraper-5565989548-h2psq,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: d89e7009-9a72-478c-8a2c-e3e441f5c3c8,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292a31de7c9f7ba74e0cbbb56dfe01ee58149c2ed639d990a88fac530b98a58e,PodSandboxId:8fe6a25991b6c5b2d2097304ea9e2af2e7e414fcb995e17762af5fee3a3cc480,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020
289c,State:CONTAINER_EXITED,CreatedAt:1765165994892175962,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74d484a5-bba5-4887-b8f5-0219ec3bf338,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ffb892b7acf9fee1045e551169c10a0d24f627e1a4f0dc26bb003271e5e73a,PodSandboxId:a92da308cd92dda4e89abaf28efbbcde1e40ca2d6b455b2bd184520adfd6ccc5,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328
d30,State:CONTAINER_RUNNING,CreatedAt:1765165991314315587,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-5758569b79-kjbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1243389f-c888-4e1c-8617-67797cf33b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3c98c0418f4749929a3301dba83a3c02391660a082fdf9cbbef167a1fc55c1,PodSandboxId:726f5bd923d2a580195d7ad3e9daf48c233eca89aef26aa5ae27f4d5f91fcf69,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b8
99ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765165990354364643,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-9f67c86d4-4vrl9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4153c1bc-6662-4b9d-ab1b-9d68396b0b49,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae2b887afda86b5e5e5220a7582295a1408d813af78a190c657815f94bc6f492,PodSandboxId:bc79734b7c3eadbc5314509422679942e96ac9bec539e891949dbdfc16c14beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a
302a562,State:CONTAINER_RUNNING,CreatedAt:1765165965095059899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56d31cd6-195a-49c2-9465-7ec1179a0bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c456926f133530d42b52258e457212eba968088c657cd9e7a43360fa5388ee,PodSandboxId:778e3add51703a4085b7aa3e7846622d11a8535b40249e18adaffe07ede41d31,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_
RUNNING,CreatedAt:1765165965118726056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-k5vpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9a434c2-3d9b-4e4c-af7c-01916f966225,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3de100435730fda810bd3ca9f4b265ad514034612671c1f29819d1247ff0429,PodSandboxId:a19081eeb70a2899cabc04
92a7fccaa92cb904d77454ba9706da384ccc85acfa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765165962545824654,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cd0ca4dfb215c4df58638a959907fc5,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:377904d89336d9a4d48877f72900ec67d14fb0f7abfb8f7ec2f896c435060d31,PodSandboxId:77434b4bb5c8ef74c07d4a2eb54c6b0f03afd9187fc73a6dc4c6a08f3bbd837e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765165960115471871,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb382bdd7480c5e71ebaced46966eb5d,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc59279d12df2d84984637c69019212986fad5917bf2ab0bb561ee567265fad4,PodSandboxId:b57fe9484e557a2863adb9790c7ed2aec54fbdd1ecdcaa40928e42bc109ec862,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765165960100779356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0587a6a24801aeaf5c2938faba7454f,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.ku
bernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b5d5f82e2e118aef9704e7466b5d5e0677f23bea30a34d63508c9386d4ebc59,PodSandboxId:15abee1f36a9e7b1e8c76b489e183eadef39e554ddfb73d1c09bb08de4a77c8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765165960095950688,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8m5nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33c879ad-b4e4-4102-8abd-6a96ca44a096,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b64587746e4e823a964852ed0f0e5e8f3e95684edcb4c3129cfaff31dea1d582,PodSandboxId:bc79734b7c3eadbc5314509422679942e96ac9bec539e891949dbdfc16c14beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765165960025479172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56d31cd6-195a-49c2-9465-7ec1179a0bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db40b4aafb30a320c57c5d305970d76e3ff2c0511fcd1363bda7c6d78a35964,PodSandboxId:c3e99a560aa7801cb5c161baf8bd9bccf7e5a6c13175b557b1f43cb6da3f1afe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765165959903869239,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1204570cb9e65bbd2704f13187cf1955,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostP
ort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd0744d0f09bd9d90fb331ae7b18a4fb177f3aa9dd22ea7906d8c4937df2c14,PodSandboxId:3a566c3d6d7784b741bc3f122cd0bb43708bfc28bbefdeba209fac9878b03236,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765165921945514755,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-k5vpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9a434c2-3d9b-4e4c-af7c-01916f966225,},Annotations:map[string]string{io.kubernete
s.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be2f787e3a27495f4e77ae126536c8afb2629b093d7965f78bf2fcd5faff37e,PodSandboxId:349f8d21030dc86e10db43ee0c7cf72074681991df597a1b953e9532a3641789,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194
460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765165921637355412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8m5nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33c879ad-b4e4-4102-8abd-6a96ca44a096,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ac3319cd477190e7792028e2f32bfec5f40c5ed5926deea650cfb74a9e28a60,PodSandboxId:4391ed406327db858ddc176c7512612dee14a679f5c43ef7a1de0c571b3addd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Stat
e:CONTAINER_EXITED,CreatedAt:1765165918829227028,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0587a6a24801aeaf5c2938faba7454f,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29f91fc32c6cbd074acb90d4829ee19fdbd522e1d62463e4c61d2e44ef0d336f,PodSandboxId:daf422c4ddbee91100a06ce58178f9fd0c58711a8ec8fbbf0e38a9ed32561e62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765165918799893006,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb382bdd7480c5e71ebaced46966eb5d,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06467e4fe2af89c356ac6de6c5d5ae36dea10b4f50764762a3a575620254ba69,PodSandboxId:e4177c23065acb32e40d0fd44d5581635997419daa9ad4f49633ea95c4517485,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSp
ec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765165918793318749,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1204570cb9e65bbd2704f13187cf1955,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c183e2f7-1c22-48ad-819a-1275ba46766e name=/runtime.v1.RuntimeService/List
Containers
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.041944517Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a3ef6e58-222b-4931-9b15-ee3c4b03b2f8 name=/runtime.v1.RuntimeService/Version
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.042014736Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a3ef6e58-222b-4931-9b15-ee3c4b03b2f8 name=/runtime.v1.RuntimeService/Version
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.043242200Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=54d0093e-0ecc-4491-8548-05e9d0d9ae12 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.044087799Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765166368044061129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:240578,},InodesUsed:&UInt64Value{Value:111,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54d0093e-0ecc-4491-8548-05e9d0d9ae12 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.045063233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=672c8315-0be2-4951-98ce-79754233d3fc name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.045292169Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=672c8315-0be2-4951-98ce-79754233d3fc name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.046527826Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:130d6e5950736ea9a73e7eff4a1c8f6212472661f45a1dfd85e1cfcd7919dbfd,PodSandboxId:0dadcf84c0e28286200e7bb650202d78beed93cff04922d08b99676c54b4cade,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1765166020393308726,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-844cf969f6-6m2t6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15d8820a-958c-4b84-b3f5-82fcd7c32a4b,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"co
ntainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c0c9332d2aa76dc350296b2615bfe92564f7888d3fe19f6525083a4848df7d4,PodSandboxId:ea0e8daa95db77439d44e6f0578a9dc1b5b456f70fbe4b4e2c956d1aa360e0ea,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1765166009249825560,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-b84665fb8-6b4qd,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 9912daad-6
acf-4a28-9670-75ac7eff5c93,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f467b0e8232ee55f8cc0ef573c2984d3910af854881fc2f6a633b354fa89be4,PodSandboxId:400a3c685749e2777c7e22e747b463f9a81d30ee340a95dc33f220807eb1d4d8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1765166002415817507,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-s
craper,io.kubernetes.pod.name: dashboard-metrics-scraper-5565989548-h2psq,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: d89e7009-9a72-478c-8a2c-e3e441f5c3c8,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292a31de7c9f7ba74e0cbbb56dfe01ee58149c2ed639d990a88fac530b98a58e,PodSandboxId:8fe6a25991b6c5b2d2097304ea9e2af2e7e414fcb995e17762af5fee3a3cc480,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020
289c,State:CONTAINER_EXITED,CreatedAt:1765165994892175962,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74d484a5-bba5-4887-b8f5-0219ec3bf338,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ffb892b7acf9fee1045e551169c10a0d24f627e1a4f0dc26bb003271e5e73a,PodSandboxId:a92da308cd92dda4e89abaf28efbbcde1e40ca2d6b455b2bd184520adfd6ccc5,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328
d30,State:CONTAINER_RUNNING,CreatedAt:1765165991314315587,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-5758569b79-kjbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1243389f-c888-4e1c-8617-67797cf33b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3c98c0418f4749929a3301dba83a3c02391660a082fdf9cbbef167a1fc55c1,PodSandboxId:726f5bd923d2a580195d7ad3e9daf48c233eca89aef26aa5ae27f4d5f91fcf69,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b8
99ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765165990354364643,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-9f67c86d4-4vrl9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4153c1bc-6662-4b9d-ab1b-9d68396b0b49,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae2b887afda86b5e5e5220a7582295a1408d813af78a190c657815f94bc6f492,PodSandboxId:bc79734b7c3eadbc5314509422679942e96ac9bec539e891949dbdfc16c14beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a
302a562,State:CONTAINER_RUNNING,CreatedAt:1765165965095059899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56d31cd6-195a-49c2-9465-7ec1179a0bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c456926f133530d42b52258e457212eba968088c657cd9e7a43360fa5388ee,PodSandboxId:778e3add51703a4085b7aa3e7846622d11a8535b40249e18adaffe07ede41d31,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_
RUNNING,CreatedAt:1765165965118726056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-k5vpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9a434c2-3d9b-4e4c-af7c-01916f966225,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3de100435730fda810bd3ca9f4b265ad514034612671c1f29819d1247ff0429,PodSandboxId:a19081eeb70a2899cabc04
92a7fccaa92cb904d77454ba9706da384ccc85acfa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765165962545824654,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cd0ca4dfb215c4df58638a959907fc5,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:377904d89336d9a4d48877f72900ec67d14fb0f7abfb8f7ec2f896c435060d31,PodSandboxId:77434b4bb5c8ef74c07d4a2eb54c6b0f03afd9187fc73a6dc4c6a08f3bbd837e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765165960115471871,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb382bdd7480c5e71ebaced46966eb5d,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc59279d12df2d84984637c69019212986fad5917bf2ab0bb561ee567265fad4,PodSandboxId:b57fe9484e557a2863adb9790c7ed2aec54fbdd1ecdcaa40928e42bc109ec862,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765165960100779356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0587a6a24801aeaf5c2938faba7454f,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.ku
bernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b5d5f82e2e118aef9704e7466b5d5e0677f23bea30a34d63508c9386d4ebc59,PodSandboxId:15abee1f36a9e7b1e8c76b489e183eadef39e554ddfb73d1c09bb08de4a77c8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765165960095950688,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8m5nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33c879ad-b4e4-4102-8abd-6a96ca44a096,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b64587746e4e823a964852ed0f0e5e8f3e95684edcb4c3129cfaff31dea1d582,PodSandboxId:bc79734b7c3eadbc5314509422679942e96ac9bec539e891949dbdfc16c14beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765165960025479172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56d31cd6-195a-49c2-9465-7ec1179a0bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db40b4aafb30a320c57c5d305970d76e3ff2c0511fcd1363bda7c6d78a35964,PodSandboxId:c3e99a560aa7801cb5c161baf8bd9bccf7e5a6c13175b557b1f43cb6da3f1afe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765165959903869239,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1204570cb9e65bbd2704f13187cf1955,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostP
ort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd0744d0f09bd9d90fb331ae7b18a4fb177f3aa9dd22ea7906d8c4937df2c14,PodSandboxId:3a566c3d6d7784b741bc3f122cd0bb43708bfc28bbefdeba209fac9878b03236,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765165921945514755,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-k5vpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9a434c2-3d9b-4e4c-af7c-01916f966225,},Annotations:map[string]string{io.kubernete
s.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be2f787e3a27495f4e77ae126536c8afb2629b093d7965f78bf2fcd5faff37e,PodSandboxId:349f8d21030dc86e10db43ee0c7cf72074681991df597a1b953e9532a3641789,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194
460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765165921637355412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8m5nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33c879ad-b4e4-4102-8abd-6a96ca44a096,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ac3319cd477190e7792028e2f32bfec5f40c5ed5926deea650cfb74a9e28a60,PodSandboxId:4391ed406327db858ddc176c7512612dee14a679f5c43ef7a1de0c571b3addd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Stat
e:CONTAINER_EXITED,CreatedAt:1765165918829227028,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0587a6a24801aeaf5c2938faba7454f,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29f91fc32c6cbd074acb90d4829ee19fdbd522e1d62463e4c61d2e44ef0d336f,PodSandboxId:daf422c4ddbee91100a06ce58178f9fd0c58711a8ec8fbbf0e38a9ed32561e62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765165918799893006,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb382bdd7480c5e71ebaced46966eb5d,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06467e4fe2af89c356ac6de6c5d5ae36dea10b4f50764762a3a575620254ba69,PodSandboxId:e4177c23065acb32e40d0fd44d5581635997419daa9ad4f49633ea95c4517485,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSp
ec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765165918793318749,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1204570cb9e65bbd2704f13187cf1955,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=672c8315-0be2-4951-98ce-79754233d3fc name=/runtime.v1.RuntimeService/List
Containers
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.076965111Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c11e47a7-1d04-43ea-9171-cbb066762d8d name=/runtime.v1.RuntimeService/Version
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.077042103Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c11e47a7-1d04-43ea-9171-cbb066762d8d name=/runtime.v1.RuntimeService/Version
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.078901823Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b61c1b4a-8d71-49ec-8bdb-a1f252e915ef name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.079858337Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765166368079833524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:240578,},InodesUsed:&UInt64Value{Value:111,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b61c1b4a-8d71-49ec-8bdb-a1f252e915ef name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.080777780Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c33fca74-b3a5-43c8-935a-5e08275e3421 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.080986851Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c33fca74-b3a5-43c8-935a-5e08275e3421 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.081534506Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:130d6e5950736ea9a73e7eff4a1c8f6212472661f45a1dfd85e1cfcd7919dbfd,PodSandboxId:0dadcf84c0e28286200e7bb650202d78beed93cff04922d08b99676c54b4cade,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1765166020393308726,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-844cf969f6-6m2t6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15d8820a-958c-4b84-b3f5-82fcd7c32a4b,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"co
ntainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c0c9332d2aa76dc350296b2615bfe92564f7888d3fe19f6525083a4848df7d4,PodSandboxId:ea0e8daa95db77439d44e6f0578a9dc1b5b456f70fbe4b4e2c956d1aa360e0ea,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1765166009249825560,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-b84665fb8-6b4qd,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 9912daad-6
acf-4a28-9670-75ac7eff5c93,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f467b0e8232ee55f8cc0ef573c2984d3910af854881fc2f6a633b354fa89be4,PodSandboxId:400a3c685749e2777c7e22e747b463f9a81d30ee340a95dc33f220807eb1d4d8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1765166002415817507,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-s
craper,io.kubernetes.pod.name: dashboard-metrics-scraper-5565989548-h2psq,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: d89e7009-9a72-478c-8a2c-e3e441f5c3c8,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292a31de7c9f7ba74e0cbbb56dfe01ee58149c2ed639d990a88fac530b98a58e,PodSandboxId:8fe6a25991b6c5b2d2097304ea9e2af2e7e414fcb995e17762af5fee3a3cc480,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020
289c,State:CONTAINER_EXITED,CreatedAt:1765165994892175962,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74d484a5-bba5-4887-b8f5-0219ec3bf338,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ffb892b7acf9fee1045e551169c10a0d24f627e1a4f0dc26bb003271e5e73a,PodSandboxId:a92da308cd92dda4e89abaf28efbbcde1e40ca2d6b455b2bd184520adfd6ccc5,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328
d30,State:CONTAINER_RUNNING,CreatedAt:1765165991314315587,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-5758569b79-kjbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1243389f-c888-4e1c-8617-67797cf33b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3c98c0418f4749929a3301dba83a3c02391660a082fdf9cbbef167a1fc55c1,PodSandboxId:726f5bd923d2a580195d7ad3e9daf48c233eca89aef26aa5ae27f4d5f91fcf69,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b8
99ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765165990354364643,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-9f67c86d4-4vrl9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4153c1bc-6662-4b9d-ab1b-9d68396b0b49,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae2b887afda86b5e5e5220a7582295a1408d813af78a190c657815f94bc6f492,PodSandboxId:bc79734b7c3eadbc5314509422679942e96ac9bec539e891949dbdfc16c14beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a
302a562,State:CONTAINER_RUNNING,CreatedAt:1765165965095059899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56d31cd6-195a-49c2-9465-7ec1179a0bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c456926f133530d42b52258e457212eba968088c657cd9e7a43360fa5388ee,PodSandboxId:778e3add51703a4085b7aa3e7846622d11a8535b40249e18adaffe07ede41d31,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_
RUNNING,CreatedAt:1765165965118726056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-k5vpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9a434c2-3d9b-4e4c-af7c-01916f966225,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3de100435730fda810bd3ca9f4b265ad514034612671c1f29819d1247ff0429,PodSandboxId:a19081eeb70a2899cabc04
92a7fccaa92cb904d77454ba9706da384ccc85acfa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765165962545824654,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cd0ca4dfb215c4df58638a959907fc5,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:377904d89336d9a4d48877f72900ec67d14fb0f7abfb8f7ec2f896c435060d31,PodSandboxId:77434b4bb5c8ef74c07d4a2eb54c6b0f03afd9187fc73a6dc4c6a08f3bbd837e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765165960115471871,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb382bdd7480c5e71ebaced46966eb5d,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc59279d12df2d84984637c69019212986fad5917bf2ab0bb561ee567265fad4,PodSandboxId:b57fe9484e557a2863adb9790c7ed2aec54fbdd1ecdcaa40928e42bc109ec862,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765165960100779356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0587a6a24801aeaf5c2938faba7454f,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.ku
bernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b5d5f82e2e118aef9704e7466b5d5e0677f23bea30a34d63508c9386d4ebc59,PodSandboxId:15abee1f36a9e7b1e8c76b489e183eadef39e554ddfb73d1c09bb08de4a77c8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765165960095950688,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8m5nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33c879ad-b4e4-4102-8abd-6a96ca44a096,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b64587746e4e823a964852ed0f0e5e8f3e95684edcb4c3129cfaff31dea1d582,PodSandboxId:bc79734b7c3eadbc5314509422679942e96ac9bec539e891949dbdfc16c14beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765165960025479172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56d31cd6-195a-49c2-9465-7ec1179a0bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db40b4aafb30a320c57c5d305970d76e3ff2c0511fcd1363bda7c6d78a35964,PodSandboxId:c3e99a560aa7801cb5c161baf8bd9bccf7e5a6c13175b557b1f43cb6da3f1afe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765165959903869239,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1204570cb9e65bbd2704f13187cf1955,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostP
ort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd0744d0f09bd9d90fb331ae7b18a4fb177f3aa9dd22ea7906d8c4937df2c14,PodSandboxId:3a566c3d6d7784b741bc3f122cd0bb43708bfc28bbefdeba209fac9878b03236,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765165921945514755,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-k5vpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9a434c2-3d9b-4e4c-af7c-01916f966225,},Annotations:map[string]string{io.kubernete
s.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be2f787e3a27495f4e77ae126536c8afb2629b093d7965f78bf2fcd5faff37e,PodSandboxId:349f8d21030dc86e10db43ee0c7cf72074681991df597a1b953e9532a3641789,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194
460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765165921637355412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8m5nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33c879ad-b4e4-4102-8abd-6a96ca44a096,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ac3319cd477190e7792028e2f32bfec5f40c5ed5926deea650cfb74a9e28a60,PodSandboxId:4391ed406327db858ddc176c7512612dee14a679f5c43ef7a1de0c571b3addd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Stat
e:CONTAINER_EXITED,CreatedAt:1765165918829227028,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0587a6a24801aeaf5c2938faba7454f,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29f91fc32c6cbd074acb90d4829ee19fdbd522e1d62463e4c61d2e44ef0d336f,PodSandboxId:daf422c4ddbee91100a06ce58178f9fd0c58711a8ec8fbbf0e38a9ed32561e62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765165918799893006,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb382bdd7480c5e71ebaced46966eb5d,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06467e4fe2af89c356ac6de6c5d5ae36dea10b4f50764762a3a575620254ba69,PodSandboxId:e4177c23065acb32e40d0fd44d5581635997419daa9ad4f49633ea95c4517485,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSp
ec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765165918793318749,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1204570cb9e65bbd2704f13187cf1955,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c33fca74-b3a5-43c8-935a-5e08275e3421 name=/runtime.v1.RuntimeService/List
Containers
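	The repeated Version, ImageFsInfo, and ListContainers requests above are the kubelet's periodic polling of cri-o over its CRI gRPC socket; the same full container list comes back on every cycle. For reference, here is a minimal sketch of the same ListContainers call (roughly what "crictl ps -a" issues), assuming cri-o's default socket path and the google.golang.org/grpc and k8s.io/cri-api modules:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// cri-o's default socket path; adjust for other runtimes.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter returns every container, matching the
	// "No filters were applied, returning full container list" debug line.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
		ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%.13s  %-28s attempt=%d  %v\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}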
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.118828465Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e3e65f8-a705-462c-a8c5-f4c61a692882 name=/runtime.v1.RuntimeService/Version
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.118923011Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e3e65f8-a705-462c-a8c5-f4c61a692882 name=/runtime.v1.RuntimeService/Version
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.119974004Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=48d55e59-2e78-4c2d-baae-ce30067c9365 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.120700397Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765166368120677996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:240578,},InodesUsed:&UInt64Value{Value:111,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48d55e59-2e78-4c2d-baae-ce30067c9365 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.122083532Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd4e3135-ef71-4b60-9bfc-149935cd7cee name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.122165655Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd4e3135-ef71-4b60-9bfc-149935cd7cee name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 03:59:28 functional-940895 crio[5233]: time="2025-12-08 03:59:28.122639920Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:130d6e5950736ea9a73e7eff4a1c8f6212472661f45a1dfd85e1cfcd7919dbfd,PodSandboxId:0dadcf84c0e28286200e7bb650202d78beed93cff04922d08b99676c54b4cade,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1765166020393308726,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-844cf969f6-6m2t6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 15d8820a-958c-4b84-b3f5-82fcd7c32a4b,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"co
ntainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c0c9332d2aa76dc350296b2615bfe92564f7888d3fe19f6525083a4848df7d4,PodSandboxId:ea0e8daa95db77439d44e6f0578a9dc1b5b456f70fbe4b4e2c956d1aa360e0ea,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1765166009249825560,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-b84665fb8-6b4qd,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 9912daad-6
acf-4a28-9670-75ac7eff5c93,},Annotations:map[string]string{io.kubernetes.container.hash: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f467b0e8232ee55f8cc0ef573c2984d3910af854881fc2f6a633b354fa89be4,PodSandboxId:400a3c685749e2777c7e22e747b463f9a81d30ee340a95dc33f220807eb1d4d8,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1765166002415817507,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-s
craper,io.kubernetes.pod.name: dashboard-metrics-scraper-5565989548-h2psq,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: d89e7009-9a72-478c-8a2c-e3e441f5c3c8,},Annotations:map[string]string{io.kubernetes.container.hash: 925d0c44,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:292a31de7c9f7ba74e0cbbb56dfe01ee58149c2ed639d990a88fac530b98a58e,PodSandboxId:8fe6a25991b6c5b2d2097304ea9e2af2e7e414fcb995e17762af5fee3a3cc480,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020
289c,State:CONTAINER_EXITED,CreatedAt:1765165994892175962,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74d484a5-bba5-4887-b8f5-0219ec3bf338,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34ffb892b7acf9fee1045e551169c10a0d24f627e1a4f0dc26bb003271e5e73a,PodSandboxId:a92da308cd92dda4e89abaf28efbbcde1e40ca2d6b455b2bd184520adfd6ccc5,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328
d30,State:CONTAINER_RUNNING,CreatedAt:1765165991314315587,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-5758569b79-kjbzs,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1243389f-c888-4e1c-8617-67797cf33b1f,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da3c98c0418f4749929a3301dba83a3c02391660a082fdf9cbbef167a1fc55c1,PodSandboxId:726f5bd923d2a580195d7ad3e9daf48c233eca89aef26aa5ae27f4d5f91fcf69,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b8
99ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1765165990354364643,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-9f67c86d4-4vrl9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4153c1bc-6662-4b9d-ab1b-9d68396b0b49,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae2b887afda86b5e5e5220a7582295a1408d813af78a190c657815f94bc6f492,PodSandboxId:bc79734b7c3eadbc5314509422679942e96ac9bec539e891949dbdfc16c14beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a
302a562,State:CONTAINER_RUNNING,CreatedAt:1765165965095059899,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56d31cd6-195a-49c2-9465-7ec1179a0bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13c456926f133530d42b52258e457212eba968088c657cd9e7a43360fa5388ee,PodSandboxId:778e3add51703a4085b7aa3e7846622d11a8535b40249e18adaffe07ede41d31,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_
RUNNING,CreatedAt:1765165965118726056,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-k5vpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9a434c2-3d9b-4e4c-af7c-01916f966225,},Annotations:map[string]string{io.kubernetes.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3de100435730fda810bd3ca9f4b265ad514034612671c1f29819d1247ff0429,PodSandboxId:a19081eeb70a2899cabc04
92a7fccaa92cb904d77454ba9706da384ccc85acfa,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b,State:CONTAINER_RUNNING,CreatedAt:1765165962545824654,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cd0ca4dfb215c4df58638a959907fc5,},Annotations:map[string]string{io.kubernetes.container.hash: b11f11f1,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:377904d89336d9a4d48877f72900ec67d14fb0f7abfb8f7ec2f896c435060d31,PodSandboxId:77434b4bb5c8ef74c07d4a2eb54c6b0f03afd9187fc73a6dc4c6a08f3bbd837e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_RUNNING,CreatedAt:1765165960115471871,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb382bdd7480c5e71ebaced46966eb5d,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc59279d12df2d84984637c69019212986fad5917bf2ab0bb561ee567265fad4,PodSandboxId:b57fe9484e557a2863adb9790c7ed2aec54fbdd1ecdcaa40928e42bc109ec862,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765165960100779356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0587a6a24801aeaf5c2938faba7454f,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.ku
bernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b5d5f82e2e118aef9704e7466b5d5e0677f23bea30a34d63508c9386d4ebc59,PodSandboxId:15abee1f36a9e7b1e8c76b489e183eadef39e554ddfb73d1c09bb08de4a77c8e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_RUNNING,CreatedAt:1765165960095950688,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8m5nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33c879ad-b4e4-4102-8abd-6a96ca44a096,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b64587746e4e823a964852ed0f0e5e8f3e95684edcb4c3129cfaff31dea1d582,PodSandboxId:bc79734b7c3eadbc5314509422679942e96ac9bec539e891949dbdfc16c14beb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1765165960025479172,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 56d31cd6-195a-49c2-9465-7ec1179a0bb2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernet
es.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1db40b4aafb30a320c57c5d305970d76e3ff2c0511fcd1363bda7c6d78a35964,PodSandboxId:c3e99a560aa7801cb5c161baf8bd9bccf7e5a6c13175b557b1f43cb6da3f1afe,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_RUNNING,CreatedAt:1765165959903869239,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1204570cb9e65bbd2704f13187cf1955,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostP
ort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbd0744d0f09bd9d90fb331ae7b18a4fb177f3aa9dd22ea7906d8c4937df2c14,PodSandboxId:3a566c3d6d7784b741bc3f122cd0bb43708bfc28bbefdeba209fac9878b03236,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139,State:CONTAINER_EXITED,CreatedAt:1765165921945514755,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7d764666f9-k5vpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9a434c2-3d9b-4e4c-af7c-01916f966225,},Annotations:map[string]string{io.kubernete
s.container.hash: 593f44d9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be2f787e3a27495f4e77ae126536c8afb2629b093d7965f78bf2fcd5faff37e,PodSandboxId:349f8d21030dc86e10db43ee0c7cf72074681991df597a1b953e9532a3641789,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8a4ded35a3eb1a80eb49c1a887194
460a56b413eed7eb69e59605daf4ec23810,State:CONTAINER_EXITED,CreatedAt:1765165921637355412,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8m5nh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33c879ad-b4e4-4102-8abd-6a96ca44a096,},Annotations:map[string]string{io.kubernetes.container.hash: a94c7581,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ac3319cd477190e7792028e2f32bfec5f40c5ed5926deea650cfb74a9e28a60,PodSandboxId:4391ed406327db858ddc176c7512612dee14a679f5c43ef7a1de0c571b3addd4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Stat
e:CONTAINER_EXITED,CreatedAt:1765165918829227028,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0587a6a24801aeaf5c2938faba7454f,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29f91fc32c6cbd074acb90d4829ee19fdbd522e1d62463e4c61d2e44ef0d336f,PodSandboxId:daf422c4ddbee91100a06ce58178f9fd0c58711a8ec8fbbf0e38a9ed32561e62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc,State:CONTAINER_EXITED,CreatedAt:1765165918799893006,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb382bdd7480c5e71ebaced46966eb5d,},Annotations:map[string]string{io.kubernetes.container.hash: a67ffa3,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06467e4fe2af89c356ac6de6c5d5ae36dea10b4f50764762a3a575620254ba69,PodSandboxId:e4177c23065acb32e40d0fd44d5581635997419daa9ad4f49633ea95c4517485,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSp
ec{Image:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46,State:CONTAINER_EXITED,CreatedAt:1765165918793318749,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-940895,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1204570cb9e65bbd2704f13187cf1955,},Annotations:map[string]string{io.kubernetes.container.hash: bf369231,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd4e3135-ef71-4b60-9bfc-149935cd7cee name=/runtime.v1.RuntimeService/List
Containers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	130d6e5950736       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  5 minutes ago       Running             mysql                       0                   0dadcf84c0e28       mysql-844cf969f6-6m2t6                       default
	8c0c9332d2aa7       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         5 minutes ago       Running             kubernetes-dashboard        0                   ea0e8daa95db7       kubernetes-dashboard-b84665fb8-6b4qd         kubernetes-dashboard
	0f467b0e8232e       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   6 minutes ago       Running             dashboard-metrics-scraper   0                   400a3c685749e       dashboard-metrics-scraper-5565989548-h2psq   kubernetes-dashboard
	292a31de7c9f7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              6 minutes ago       Exited              mount-munger                0                   8fe6a25991b6c       busybox-mount                                default
	34ffb892b7acf       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6            6 minutes ago       Running             echo-server                 0                   a92da308cd92d       hello-node-5758569b79-kjbzs                  default
	da3c98c0418f4       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6            6 minutes ago       Running             echo-server                 0                   726f5bd923d2a       hello-node-connect-9f67c86d4-4vrl9           default
	13c456926f133       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                 6 minutes ago       Running             coredns                     2                   778e3add51703       coredns-7d764666f9-k5vpk                     kube-system
	ae2b887afda86       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 6 minutes ago       Running             storage-provisioner         3                   bc79734b7c3ea       storage-provisioner                          kube-system
	a3de100435730       aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b                                                 6 minutes ago       Running             kube-apiserver              0                   a19081eeb70a2       kube-apiserver-functional-940895             kube-system
	377904d89336d       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                                 6 minutes ago       Running             kube-controller-manager     2                   77434b4bb5c8e       kube-controller-manager-functional-940895    kube-system
	cc59279d12df2       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 6 minutes ago       Running             etcd                        2                   b57fe9484e557       etcd-functional-940895                       kube-system
	1b5d5f82e2e11       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                                 6 minutes ago       Running             kube-proxy                  2                   15abee1f36a9e       kube-proxy-8m5nh                             kube-system
	b64587746e4e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 6 minutes ago       Exited              storage-provisioner         2                   bc79734b7c3ea       storage-provisioner                          kube-system
	1db40b4aafb30       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                                 6 minutes ago       Running             kube-scheduler              2                   c3e99a560aa78       kube-scheduler-functional-940895             kube-system
	cbd0744d0f09b       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                 7 minutes ago       Exited              coredns                     1                   3a566c3d6d778       coredns-7d764666f9-k5vpk                     kube-system
	1be2f787e3a27       8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810                                                 7 minutes ago       Exited              kube-proxy                  1                   349f8d21030dc       kube-proxy-8m5nh                             kube-system
	2ac3319cd4771       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                 7 minutes ago       Exited              etcd                        1                   4391ed406327d       etcd-functional-940895                       kube-system
	29f91fc32c6cb       45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc                                                 7 minutes ago       Exited              kube-controller-manager     1                   daf422c4ddbee       kube-controller-manager-functional-940895    kube-system
	06467e4fe2af8       7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46                                                 7 minutes ago       Exited              kube-scheduler              1                   e4177c23065ac       kube-scheduler-functional-940895             kube-system
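	The CREATED column in this table is a relative rendering of the raw CreatedAt values (nanoseconds since the Unix epoch) that appear in the ListContainers responses above. A minimal sketch of the conversion, using the mysql container's timestamp copied from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// CreatedAt for container 130d6e5950736 (mysql), copied from the log above.
	createdAt := int64(1765166020393308726) // nanoseconds since the Unix epoch
	created := time.Unix(0, createdAt)
	fmt.Printf("%s (%v ago)\n",
		created.UTC().Format(time.RFC3339),
		time.Since(created).Round(time.Minute))
}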
	
	
	==> coredns [13c456926f133530d42b52258e457212eba968088c657cd9e7a43360fa5388ee] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:42228 - 9009 "HINFO IN 1024840056985688686.6763943267458582777. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031144048s
	
	
	==> coredns [cbd0744d0f09bd9d90fb331ae7b18a4fb177f3aa9dd22ea7906d8c4937df2c14] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:36269 - 41345 "HINFO IN 2938566803269363262.4942055636706344478. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023994547s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-940895
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-940895
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=730a0938e5fe3e95dced085e5e597b4345feecad
	                    minikube.k8s.io/name=functional-940895
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_08T03_51_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Dec 2025 03:51:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-940895
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Dec 2025 03:59:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Dec 2025 03:58:12 +0000   Mon, 08 Dec 2025 03:51:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Dec 2025 03:58:12 +0000   Mon, 08 Dec 2025 03:51:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Dec 2025 03:58:12 +0000   Mon, 08 Dec 2025 03:51:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Dec 2025 03:58:12 +0000   Mon, 08 Dec 2025 03:51:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.191
	  Hostname:    functional-940895
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 9657e1ff68aa484eaaf5601dd8d8f6b5
	  System UUID:                9657e1ff-68aa-484e-aaf5-601dd8d8f6b5
	  Boot ID:                    a85728b6-3a4f-4edf-af52-b5ab82b9643b
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.35.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-kjbzs                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  default                     hello-node-connect-9f67c86d4-4vrl9            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  default                     mysql-844cf969f6-6m2t6                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    6m9s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-7d764666f9-k5vpk                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m18s
	  kube-system                 etcd-functional-940895                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m24s
	  kube-system                 kube-apiserver-functional-940895              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m43s
	  kube-system                 kube-controller-manager-functional-940895     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 kube-proxy-8m5nh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 kube-scheduler-functional-940895              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-5565989548-h2psq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-6b4qd          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  8m20s  node-controller  Node functional-940895 event: Registered Node functional-940895 in Controller
	  Normal  RegisteredNode  7m25s  node-controller  Node functional-940895 event: Registered Node functional-940895 in Controller
	  Normal  RegisteredNode  6m41s  node-controller  Node functional-940895 event: Registered Node functional-940895 in Controller
	
	
	==> dmesg <==
	[Dec 8 03:50] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001206] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003326] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.152615] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.090235] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.100169] kauditd_printk_skb: 102 callbacks suppressed
	[Dec 8 03:51] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.687088] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.293778] kauditd_printk_skb: 248 callbacks suppressed
	[ +27.605682] kauditd_printk_skb: 45 callbacks suppressed
	[Dec 8 03:52] kauditd_printk_skb: 242 callbacks suppressed
	[ +15.117389] kauditd_printk_skb: 137 callbacks suppressed
	[  +0.111042] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.627934] kauditd_printk_skb: 78 callbacks suppressed
	[  +3.664913] kauditd_printk_skb: 314 callbacks suppressed
	[  +8.048403] kauditd_printk_skb: 36 callbacks suppressed
	[Dec 8 03:53] kauditd_printk_skb: 169 callbacks suppressed
	[  +5.778862] kauditd_printk_skb: 83 callbacks suppressed
	[  +1.435279] crun[9053]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +1.319703] kauditd_printk_skb: 130 callbacks suppressed
	[  +0.000026] kauditd_printk_skb: 15 callbacks suppressed
	[ +11.269820] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [2ac3319cd477190e7792028e2f32bfec5f40c5ed5926deea650cfb74a9e28a60] <==
	{"level":"warn","ts":"2025-12-08T03:51:59.952289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T03:51:59.965678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T03:51:59.985937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T03:51:59.988174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T03:51:59.997063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T03:52:00.001580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T03:52:00.044607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48260","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-08T03:52:25.219239Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-08T03:52:25.219324Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-940895","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.191:2380"],"advertise-client-urls":["https://192.168.39.191:2379"]}
	{"level":"error","ts":"2025-12-08T03:52:25.219436Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-08T03:52:25.305692Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-08T03:52:25.305792Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-08T03:52:25.305831Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f21a8e08563785d2","current-leader-member-id":"f21a8e08563785d2"}
	{"level":"info","ts":"2025-12-08T03:52:25.305915Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-08T03:52:25.305924Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-08T03:52:25.305996Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-08T03:52:25.306079Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-08T03:52:25.306088Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-08T03:52:25.306126Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.191:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-08T03:52:25.306133Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.191:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-08T03:52:25.306140Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.191:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-08T03:52:25.309159Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.191:2380"}
	{"level":"error","ts":"2025-12-08T03:52:25.309209Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.191:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-08T03:52:25.309231Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.191:2380"}
	{"level":"info","ts":"2025-12-08T03:52:25.309236Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-940895","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.191:2380"],"advertise-client-urls":["https://192.168.39.191:2379"]}
	
	
	==> etcd [cc59279d12df2d84984637c69019212986fad5917bf2ab0bb561ee567265fad4] <==
	{"level":"warn","ts":"2025-12-08T03:53:39.962805Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-08T03:53:39.317878Z","time spent":"644.88481ms","remote":"127.0.0.1:50708","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:918 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-12-08T03:53:42.204788Z","caller":"traceutil/trace.go:172","msg":"trace[1023242547] transaction","detail":"{read_only:false; response_revision:928; number_of_response:1; }","duration":"234.497112ms","start":"2025-12-08T03:53:41.970279Z","end":"2025-12-08T03:53:42.204777Z","steps":["trace[1023242547] 'process raft request'  (duration: 234.37213ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-08T03:53:42.205519Z","caller":"traceutil/trace.go:172","msg":"trace[1637569948] linearizableReadLoop","detail":"{readStateIndex:1012; appliedIndex:1012; }","duration":"209.830577ms","start":"2025-12-08T03:53:41.994761Z","end":"2025-12-08T03:53:42.204592Z","steps":["trace[1637569948] 'read index received'  (duration: 209.826364ms)","trace[1637569948] 'applied index is now lower than readState.Index'  (duration: 3.52µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-08T03:53:42.205629Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"210.855763ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-08T03:53:42.205647Z","caller":"traceutil/trace.go:172","msg":"trace[1916086867] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:928; }","duration":"210.885278ms","start":"2025-12-08T03:53:41.994757Z","end":"2025-12-08T03:53:42.205642Z","steps":["trace[1916086867] 'agreement among raft nodes before linearized reading'  (duration: 210.830314ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-08T03:53:46.494254Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":9642940159663342554,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-12-08T03:53:46.522252Z","caller":"traceutil/trace.go:172","msg":"trace[33537927] linearizableReadLoop","detail":"{readStateIndex:1014; appliedIndex:1014; }","duration":"528.304358ms","start":"2025-12-08T03:53:45.993923Z","end":"2025-12-08T03:53:46.522227Z","steps":["trace[33537927] 'read index received'  (duration: 528.29865ms)","trace[33537927] 'applied index is now lower than readState.Index'  (duration: 4.874µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-08T03:53:46.522347Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-08T03:53:45.938346Z","time spent":"583.995868ms","remote":"127.0.0.1:50472","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2025-12-08T03:53:46.522456Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"528.519045ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-08T03:53:46.522477Z","caller":"traceutil/trace.go:172","msg":"trace[364439919] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:929; }","duration":"528.551484ms","start":"2025-12-08T03:53:45.993919Z","end":"2025-12-08T03:53:46.522470Z","steps":["trace[364439919] 'agreement among raft nodes before linearized reading'  (duration: 528.465015ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-08T03:53:46.522496Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-08T03:53:45.993903Z","time spent":"528.588409ms","remote":"127.0.0.1:50750","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-12-08T03:53:46.526160Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"441.007709ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-08T03:53:46.526291Z","caller":"traceutil/trace.go:172","msg":"trace[635075203] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:929; }","duration":"441.141297ms","start":"2025-12-08T03:53:46.085141Z","end":"2025-12-08T03:53:46.526282Z","steps":["trace[635075203] 'agreement among raft nodes before linearized reading'  (duration: 440.98842ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-08T03:53:46.526353Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-08T03:53:46.084997Z","time spent":"441.347949ms","remote":"127.0.0.1:50604","response type":"/etcdserverpb.KV/Range","request count":0,"request size":24,"response count":0,"response size":28,"request content":"key:\"/registry/namespaces\" limit:1 "}
	{"level":"info","ts":"2025-12-08T03:53:46.526659Z","caller":"traceutil/trace.go:172","msg":"trace[951254325] transaction","detail":"{read_only:false; response_revision:931; number_of_response:1; }","duration":"295.183463ms","start":"2025-12-08T03:53:46.231467Z","end":"2025-12-08T03:53:46.526651Z","steps":["trace[951254325] 'process raft request'  (duration: 295.151616ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-08T03:53:46.526952Z","caller":"traceutil/trace.go:172","msg":"trace[1691667172] transaction","detail":"{read_only:false; response_revision:930; number_of_response:1; }","duration":"325.287162ms","start":"2025-12-08T03:53:46.201653Z","end":"2025-12-08T03:53:46.526940Z","steps":["trace[1691667172] 'process raft request'  (duration: 324.780679ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-08T03:53:46.527027Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-08T03:53:46.201632Z","time spent":"325.356319ms","remote":"127.0.0.1:50896","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":556,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/functional-940895\" mod_revision:916 > success:<request_put:<key:\"/registry/leases/kube-node-lease/functional-940895\" value_size:498 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/functional-940895\" > >"}
	{"level":"warn","ts":"2025-12-08T03:53:46.527273Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"261.589802ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-12-08T03:53:46.527315Z","caller":"traceutil/trace.go:172","msg":"trace[1820464428] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:931; }","duration":"261.630458ms","start":"2025-12-08T03:53:46.265677Z","end":"2025-12-08T03:53:46.527308Z","steps":["trace[1820464428] 'agreement among raft nodes before linearized reading'  (duration: 261.540703ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-08T03:53:46.527446Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"246.818717ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-08T03:53:46.527482Z","caller":"traceutil/trace.go:172","msg":"trace[404362307] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:931; }","duration":"246.852953ms","start":"2025-12-08T03:53:46.280621Z","end":"2025-12-08T03:53:46.527474Z","steps":["trace[404362307] 'agreement among raft nodes before linearized reading'  (duration: 246.80634ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-08T03:53:46.527569Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.71552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-08T03:53:46.527601Z","caller":"traceutil/trace.go:172","msg":"trace[1471564862] range","detail":"{range_begin:/registry/networkpolicies; range_end:; response_count:0; response_revision:931; }","duration":"151.749104ms","start":"2025-12-08T03:53:46.375847Z","end":"2025-12-08T03:53:46.527596Z","steps":["trace[1471564862] 'agreement among raft nodes before linearized reading'  (duration: 150.930217ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-08T03:53:46.527686Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"305.962139ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-08T03:53:46.527715Z","caller":"traceutil/trace.go:172","msg":"trace[844534358] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:931; }","duration":"305.991584ms","start":"2025-12-08T03:53:46.221719Z","end":"2025-12-08T03:53:46.527711Z","steps":["trace[844534358] 'agreement among raft nodes before linearized reading'  (duration: 305.954363ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:59:28 up 8 min,  0 users,  load average: 0.11, 0.32, 0.19
	Linux functional-940895 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [a3de100435730fda810bd3ca9f4b265ad514034612671c1f29819d1247ff0429] <==
	E1208 03:52:44.822731       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1208 03:52:44.836875       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1208 03:52:44.837045       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:44.837068       1 policy_source.go:248] refreshing policies
	I1208 03:52:44.863882       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1208 03:52:44.878882       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1208 03:52:44.913647       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1208 03:52:45.622753       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1208 03:52:46.476858       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1208 03:52:46.524591       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1208 03:52:46.557038       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1208 03:52:46.567525       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1208 03:52:48.206540       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1208 03:52:48.305354       1 controller.go:667] quota admission added evaluator for: endpoints
	I1208 03:52:48.406370       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1208 03:53:03.293585       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.105.217"}
	I1208 03:53:07.322121       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.65.127"}
	I1208 03:53:07.526544       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.96.253.112"}
	I1208 03:53:18.110776       1 controller.go:667] quota admission added evaluator for: namespaces
	I1208 03:53:18.467501       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.209.75"}
	I1208 03:53:18.493800       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.35.92"}
	I1208 03:53:19.231330       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.105.51.65"}
	E1208 03:53:46.656867       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8441->192.168.39.1:48882: use of closed network connection
	E1208 03:53:48.037655       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8441->192.168.39.1:48902: use of closed network connection
	E1208 03:53:49.467722       1 conn.go:339] Error on socket receive: read tcp 192.168.39.191:8441->192.168.39.1:48916: use of closed network connection
	
	
	==> kube-controller-manager [29f91fc32c6cbd074acb90d4829ee19fdbd522e1d62463e4c61d2e44ef0d336f] <==
	I1208 03:52:03.817225       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:03.817250       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:03.817283       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:03.820356       1 range_allocator.go:177] "Sending events to api server"
	I1208 03:52:03.820466       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1208 03:52:03.820494       1 shared_informer.go:370] "Waiting for caches to sync"
	I1208 03:52:03.820510       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:03.821488       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:03.821661       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:03.821789       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:03.821634       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:03.821639       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:03.821645       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:03.821653       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:03.821892       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:03.821626       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:03.821974       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:03.821617       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:03.824112       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:03.837088       1 shared_informer.go:370] "Waiting for caches to sync"
	I1208 03:52:03.871410       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:03.924301       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:03.924362       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1208 03:52:03.924367       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1208 03:52:03.937820       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [377904d89336d9a4d48877f72900ec67d14fb0f7abfb8f7ec2f896c435060d31] <==
	I1208 03:52:47.937760       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:47.937766       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:47.937778       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:47.937783       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:47.937939       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:47.938016       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:47.937742       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:47.938147       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:47.938847       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:47.940112       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:47.940120       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:47.956807       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:48.031729       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:48.031761       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1208 03:52:48.031767       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1208 03:52:48.035631       1 shared_informer.go:377] "Caches are synced"
	E1208 03:53:18.231615       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1208 03:53:18.250796       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1208 03:53:18.264122       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1208 03:53:18.276059       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1208 03:53:18.284952       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1208 03:53:18.285355       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1208 03:53:18.299380       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-5565989548\" failed with pods \"dashboard-metrics-scraper-5565989548-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1208 03:53:18.302206       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1208 03:53:18.310129       1 replica_set.go:592] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-b84665fb8\" failed with pods \"kubernetes-dashboard-b84665fb8-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [1b5d5f82e2e118aef9704e7466b5d5e0677f23bea30a34d63508c9386d4ebc59] <==
	I1208 03:52:45.277061       1 shared_informer.go:370] "Waiting for caches to sync"
	I1208 03:52:45.378460       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:45.378505       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.191"]
	E1208 03:52:45.378561       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1208 03:52:45.409992       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1208 03:52:45.410052       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1208 03:52:45.410080       1 server_linux.go:136] "Using iptables Proxier"
	I1208 03:52:45.418593       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1208 03:52:45.418816       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1208 03:52:45.418833       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 03:52:45.424583       1 config.go:200] "Starting service config controller"
	I1208 03:52:45.428909       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1208 03:52:45.431089       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1208 03:52:45.432757       1 config.go:106] "Starting endpoint slice config controller"
	I1208 03:52:45.432831       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1208 03:52:45.432848       1 config.go:403] "Starting serviceCIDR config controller"
	I1208 03:52:45.432853       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1208 03:52:45.426986       1 config.go:309] "Starting node config controller"
	I1208 03:52:45.433085       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1208 03:52:45.433091       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1208 03:52:45.533051       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1208 03:52:45.533150       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [1be2f787e3a27495f4e77ae126536c8afb2629b093d7965f78bf2fcd5faff37e] <==
	I1208 03:52:01.987127       1 shared_informer.go:370] "Waiting for caches to sync"
	I1208 03:52:02.088450       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:02.088488       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.191"]
	E1208 03:52:02.088546       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1208 03:52:02.122345       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1208 03:52:02.122436       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1208 03:52:02.122460       1 server_linux.go:136] "Using iptables Proxier"
	I1208 03:52:02.131320       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1208 03:52:02.131606       1 server.go:529] "Version info" version="v1.35.0-beta.0"
	I1208 03:52:02.131636       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 03:52:02.136123       1 config.go:309] "Starting node config controller"
	I1208 03:52:02.136172       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1208 03:52:02.136180       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1208 03:52:02.136313       1 config.go:403] "Starting serviceCIDR config controller"
	I1208 03:52:02.136318       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1208 03:52:02.136371       1 config.go:200] "Starting service config controller"
	I1208 03:52:02.136374       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1208 03:52:02.136383       1 config.go:106] "Starting endpoint slice config controller"
	I1208 03:52:02.136387       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1208 03:52:02.236870       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1208 03:52:02.236917       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1208 03:52:02.236932       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [06467e4fe2af89c356ac6de6c5d5ae36dea10b4f50764762a3a575620254ba69] <==
	I1208 03:51:59.575541       1 serving.go:386] Generated self-signed cert in-memory
	W1208 03:52:00.590935       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1208 03:52:00.591112       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1208 03:52:00.591148       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1208 03:52:00.591166       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1208 03:52:00.646292       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1208 03:52:00.649467       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 03:52:00.653199       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 03:52:00.653239       1 shared_informer.go:370] "Waiting for caches to sync"
	I1208 03:52:00.654451       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1208 03:52:00.653214       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1208 03:52:00.756823       1 shared_informer.go:377] "Caches are synced"
	I1208 03:52:25.240787       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1208 03:52:25.240873       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1208 03:52:25.241263       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1208 03:52:25.241458       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 03:52:25.241596       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1208 03:52:25.241654       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [1db40b4aafb30a320c57c5d305970d76e3ff2c0511fcd1363bda7c6d78a35964] <==
	I1208 03:52:40.712670       1 serving.go:386] Generated self-signed cert in-memory
	W1208 03:52:40.719813       1 authentication.go:397] Error looking up in-cluster authentication configuration: Get "https://192.168.39.191:8441/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.39.191:8441: connect: connection refused
	W1208 03:52:40.719847       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1208 03:52:40.719854       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1208 03:52:40.730048       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-beta.0"
	I1208 03:52:40.730079       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 03:52:40.731787       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1208 03:52:40.731883       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 03:52:40.731907       1 shared_informer.go:370] "Waiting for caches to sync"
	I1208 03:52:40.731921       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1208 03:52:44.716765       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\""
	E1208 03:52:44.738877       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1208 03:52:44.804846       1 reflector.go:429] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope"
	I1208 03:52:50.033133       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 08 03:58:31 functional-940895 kubelet[6143]: E1208 03:58:31.998777    6143 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765166311998440382  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240578}  inodes_used:{value:111}}"
	Dec 08 03:58:31 functional-940895 kubelet[6143]: E1208 03:58:31.998800    6143 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765166311998440382  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240578}  inodes_used:{value:111}}"
	Dec 08 03:58:32 functional-940895 kubelet[6143]: E1208 03:58:32.770985    6143 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-b84665fb8-6b4qd" containerName="kubernetes-dashboard"
	Dec 08 03:58:36 functional-940895 kubelet[6143]: E1208 03:58:36.770537    6143 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-5565989548-h2psq" containerName="dashboard-metrics-scraper"
	Dec 08 03:58:41 functional-940895 kubelet[6143]: E1208 03:58:41.894266    6143 manager.go:1119] Failed to create existing container: /kubepods/burstable/podf9a434c2-3d9b-4e4c-af7c-01916f966225/crio-3a566c3d6d7784b741bc3f122cd0bb43708bfc28bbefdeba209fac9878b03236: Error finding container 3a566c3d6d7784b741bc3f122cd0bb43708bfc28bbefdeba209fac9878b03236: Status 404 returned error can't find the container with id 3a566c3d6d7784b741bc3f122cd0bb43708bfc28bbefdeba209fac9878b03236
	Dec 08 03:58:41 functional-940895 kubelet[6143]: E1208 03:58:41.894885    6143 manager.go:1119] Failed to create existing container: /kubepods/burstable/pod1204570cb9e65bbd2704f13187cf1955/crio-e4177c23065acb32e40d0fd44d5581635997419daa9ad4f49633ea95c4517485: Error finding container e4177c23065acb32e40d0fd44d5581635997419daa9ad4f49633ea95c4517485: Status 404 returned error can't find the container with id e4177c23065acb32e40d0fd44d5581635997419daa9ad4f49633ea95c4517485
	Dec 08 03:58:41 functional-940895 kubelet[6143]: E1208 03:58:41.895277    6143 manager.go:1119] Failed to create existing container: /kubepods/burstable/podcb382bdd7480c5e71ebaced46966eb5d/crio-daf422c4ddbee91100a06ce58178f9fd0c58711a8ec8fbbf0e38a9ed32561e62: Error finding container daf422c4ddbee91100a06ce58178f9fd0c58711a8ec8fbbf0e38a9ed32561e62: Status 404 returned error can't find the container with id daf422c4ddbee91100a06ce58178f9fd0c58711a8ec8fbbf0e38a9ed32561e62
	Dec 08 03:58:41 functional-940895 kubelet[6143]: E1208 03:58:41.895789    6143 manager.go:1119] Failed to create existing container: /kubepods/burstable/poda0587a6a24801aeaf5c2938faba7454f/crio-4391ed406327db858ddc176c7512612dee14a679f5c43ef7a1de0c571b3addd4: Error finding container 4391ed406327db858ddc176c7512612dee14a679f5c43ef7a1de0c571b3addd4: Status 404 returned error can't find the container with id 4391ed406327db858ddc176c7512612dee14a679f5c43ef7a1de0c571b3addd4
	Dec 08 03:58:41 functional-940895 kubelet[6143]: E1208 03:58:41.896072    6143 manager.go:1119] Failed to create existing container: /kubepods/besteffort/pod33c879ad-b4e4-4102-8abd-6a96ca44a096/crio-349f8d21030dc86e10db43ee0c7cf72074681991df597a1b953e9532a3641789: Error finding container 349f8d21030dc86e10db43ee0c7cf72074681991df597a1b953e9532a3641789: Status 404 returned error can't find the container with id 349f8d21030dc86e10db43ee0c7cf72074681991df597a1b953e9532a3641789
	Dec 08 03:58:42 functional-940895 kubelet[6143]: E1208 03:58:42.001241    6143 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765166322000613643  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240578}  inodes_used:{value:111}}"
	Dec 08 03:58:42 functional-940895 kubelet[6143]: E1208 03:58:42.001268    6143 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765166322000613643  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240578}  inodes_used:{value:111}}"
	Dec 08 03:58:49 functional-940895 kubelet[6143]: E1208 03:58:49.774233    6143 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-functional-940895" containerName="kube-scheduler"
	Dec 08 03:58:52 functional-940895 kubelet[6143]: E1208 03:58:52.003606    6143 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765166332003189704  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240578}  inodes_used:{value:111}}"
	Dec 08 03:58:52 functional-940895 kubelet[6143]: E1208 03:58:52.003645    6143 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765166332003189704  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240578}  inodes_used:{value:111}}"
	Dec 08 03:59:02 functional-940895 kubelet[6143]: E1208 03:59:02.006507    6143 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765166342005996378  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240578}  inodes_used:{value:111}}"
	Dec 08 03:59:02 functional-940895 kubelet[6143]: E1208 03:59:02.006533    6143 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765166342005996378  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240578}  inodes_used:{value:111}}"
	Dec 08 03:59:12 functional-940895 kubelet[6143]: E1208 03:59:12.009882    6143 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765166352008603221  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240578}  inodes_used:{value:111}}"
	Dec 08 03:59:12 functional-940895 kubelet[6143]: E1208 03:59:12.009907    6143 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765166352008603221  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240578}  inodes_used:{value:111}}"
	Dec 08 03:59:14 functional-940895 kubelet[6143]: E1208 03:59:14.627826    6143 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 08 03:59:14 functional-940895 kubelet[6143]: E1208 03:59:14.627888    6143 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 08 03:59:14 functional-940895 kubelet[6143]: E1208 03:59:14.628129    6143 kuberuntime_manager.go:1664] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(bdbb7528-256a-4d9c-9641-dcdb820d6496): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 08 03:59:14 functional-940895 kubelet[6143]: E1208 03:59:14.628188    6143 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="bdbb7528-256a-4d9c-9641-dcdb820d6496"
	Dec 08 03:59:22 functional-940895 kubelet[6143]: E1208 03:59:22.013532    6143 eviction_manager.go:264] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765166362012961054  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240578}  inodes_used:{value:111}}"
	Dec 08 03:59:22 functional-940895 kubelet[6143]: E1208 03:59:22.013551    6143 eviction_manager.go:217] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765166362012961054  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:240578}  inodes_used:{value:111}}"
	Dec 08 03:59:24 functional-940895 kubelet[6143]: E1208 03:59:24.770873    6143 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-k5vpk" containerName="coredns"
	
	
	==> kubernetes-dashboard [8c0c9332d2aa76dc350296b2615bfe92564f7888d3fe19f6525083a4848df7d4] <==
	2025/12/08 03:53:29 Using namespace: kubernetes-dashboard
	2025/12/08 03:53:29 Using in-cluster config to connect to apiserver
	2025/12/08 03:53:29 Using secret token for csrf signing
	2025/12/08 03:53:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/08 03:53:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/08 03:53:29 Successful initial request to the apiserver, version: v1.35.0-beta.0
	2025/12/08 03:53:29 Generating JWE encryption key
	2025/12/08 03:53:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/08 03:53:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/08 03:53:29 Initializing JWE encryption key from synchronized object
	2025/12/08 03:53:29 Creating in-cluster Sidecar client
	2025/12/08 03:53:29 Successful request to sidecar
	2025/12/08 03:53:29 Serving insecurely on HTTP port: 9090
	2025/12/08 03:53:29 Starting overwatch
	
	
	==> storage-provisioner [ae2b887afda86b5e5e5220a7582295a1408d813af78a190c657815f94bc6f492] <==
	W1208 03:59:04.034571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:06.037340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:06.041926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:08.045488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:08.054375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:10.058573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:10.063719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:12.066801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:12.071345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:14.074496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:14.079741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:16.083360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:16.088237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:18.092265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:18.098629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:20.102271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:20.110178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:22.112924       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:22.117984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:24.121129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:24.129958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:26.133652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:26.138721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:28.142887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1208 03:59:28.151698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b64587746e4e823a964852ed0f0e5e8f3e95684edcb4c3129cfaff31dea1d582] <==
	I1208 03:52:40.374509       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1208 03:52:40.379256       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
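Two patterns in the storage-provisioner logs above are worth separating from the actual failure: the steady stream of "v1 Endpoints is deprecated" warnings most likely comes from the provisioner's Endpoints-based leader-election lock being renewed every ~2s (noisy but harmless on v1.33+), and the single fatal "connection refused" at 03:52:40 just means an earlier provisioner container started before the apiserver was reachable and was restarted. A hedged pair of checks (context and pod names taken from the logs; --previous assumes the restarted container's logs are still retained):

	# confirm the apiserver is reachable and healthy now
	kubectl --context functional-940895 get --raw /readyz
	# inspect the provisioner container that crashed before the restart
	kubectl --context functional-940895 -n kube-system logs storage-provisioner --previous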
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-940895 -n functional-940895
helpers_test.go:269: (dbg) Run:  kubectl --context functional-940895 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount sp-pod
helpers_test.go:282: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-940895 describe pod busybox-mount sp-pod
helpers_test.go:290: (dbg) kubectl --context functional-940895 describe pod busybox-mount sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-940895/192.168.39.191
	Start Time:       Mon, 08 Dec 2025 03:53:09 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://292a31de7c9f7ba74e0cbbb56dfe01ee58149c2ed639d990a88fac530b98a58e
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 08 Dec 2025 03:53:14 +0000
	      Finished:     Mon, 08 Dec 2025 03:53:14 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p7wtw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-p7wtw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m20s  default-scheduler  Successfully assigned default/busybox-mount to functional-940895
	  Normal  Pulling    6m19s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6m15s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.588s (4.308s including waiting). Image size: 4631262 bytes.
	  Normal  Created    6m15s  kubelet            Container created
	  Normal  Started    6m15s  kubelet            Container started
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-940895/192.168.39.191
	Start Time:       Mon, 08 Dec 2025 03:53:26 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:  10.244.0.13
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-df9p4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-df9p4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m3s                 default-scheduler  Successfully assigned default/sp-pod to functional-940895
	  Warning  Failed     5m16s                kubelet            Failed to pull image "docker.io/nginx": copying system image from manifest list: determining manifest MIME type for docker://nginx:latest: reading manifest sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    47s (x5 over 6m)     kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     15s (x5 over 5m16s)  kubelet            Error: ErrImagePull
	  Warning  Failed     15s (x4 over 4m28s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    1s (x11 over 5m16s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     1s (x11 over 5m16s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (369.63s)
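The PVC plumbing itself worked (the claim bound and sp-pod mounted it); the test only timed out because every anonymous pull of docker.io/nginx hit Docker Hub's unauthenticated rate limit, as the events above show. A hedged mitigation sketch, assuming a local Docker daemon with Hub credentials; minikube image load sideloads the image into the node so the kubelet can find it locally (note that an untagged "nginx" resolves to :latest, whose default imagePullPolicy is Always, so pinning a tag or digest in the test manifest may also be needed before the cached copy is actually used):

	docker login                                   # authenticate so the pull below is not anonymous/rate-limited
	docker pull docker.io/nginx:latest
	out/minikube-linux-amd64 -p functional-940895 image load docker.io/nginx:latest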

                                                
                                    
TestPreload (150.68s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-308311 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1208 04:38:07.333657  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-308311 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m34.024336066s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-308311 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-308311 image pull gcr.io/k8s-minikube/busybox: (3.539722305s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-308311
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-308311: (8.202644769s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-308311 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1208 04:39:36.253765  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-308311 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (42.327956885s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-308311 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
panic.go:615: *** TestPreload FAILED at 2025-12-08 04:39:47.629892449 +0000 UTC m=+3649.934214909
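The failure is narrow: the cluster restarted fine, but image list after the --preload=true restart no longer contains the gcr.io/k8s-minikube/busybox image pulled before the stop, i.e. the manually pulled image did not survive the preload restore of the crio image store. A hedged manual reproduction using the same flags the test used (the profile name here is arbitrary):

	out/minikube-linux-amd64 start -p preload-repro --memory=3072 --preload=false --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p preload-repro image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p preload-repro
	out/minikube-linux-amd64 start -p preload-repro --preload=true --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p preload-repro image list | grep busybox   # expected to match; empty in this run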
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-308311 -n test-preload-308311
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-308311 logs -n 25
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-012284 ssh -n multinode-012284-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-012284     │ jenkins │ v1.37.0 │ 08 Dec 25 04:25 UTC │ 08 Dec 25 04:25 UTC │
	│ ssh     │ multinode-012284 ssh -n multinode-012284 sudo cat /home/docker/cp-test_multinode-012284-m03_multinode-012284.txt                                          │ multinode-012284     │ jenkins │ v1.37.0 │ 08 Dec 25 04:25 UTC │ 08 Dec 25 04:25 UTC │
	│ cp      │ multinode-012284 cp multinode-012284-m03:/home/docker/cp-test.txt multinode-012284-m02:/home/docker/cp-test_multinode-012284-m03_multinode-012284-m02.txt │ multinode-012284     │ jenkins │ v1.37.0 │ 08 Dec 25 04:25 UTC │ 08 Dec 25 04:25 UTC │
	│ ssh     │ multinode-012284 ssh -n multinode-012284-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-012284     │ jenkins │ v1.37.0 │ 08 Dec 25 04:25 UTC │ 08 Dec 25 04:25 UTC │
	│ ssh     │ multinode-012284 ssh -n multinode-012284-m02 sudo cat /home/docker/cp-test_multinode-012284-m03_multinode-012284-m02.txt                                  │ multinode-012284     │ jenkins │ v1.37.0 │ 08 Dec 25 04:25 UTC │ 08 Dec 25 04:25 UTC │
	│ node    │ multinode-012284 node stop m03                                                                                                                            │ multinode-012284     │ jenkins │ v1.37.0 │ 08 Dec 25 04:25 UTC │ 08 Dec 25 04:25 UTC │
	│ node    │ multinode-012284 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-012284     │ jenkins │ v1.37.0 │ 08 Dec 25 04:25 UTC │ 08 Dec 25 04:26 UTC │
	│ node    │ list -p multinode-012284                                                                                                                                  │ multinode-012284     │ jenkins │ v1.37.0 │ 08 Dec 25 04:26 UTC │                     │
	│ stop    │ -p multinode-012284                                                                                                                                       │ multinode-012284     │ jenkins │ v1.37.0 │ 08 Dec 25 04:26 UTC │ 08 Dec 25 04:29 UTC │
	│ start   │ -p multinode-012284 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-012284     │ jenkins │ v1.37.0 │ 08 Dec 25 04:29 UTC │ 08 Dec 25 04:31 UTC │
	│ node    │ list -p multinode-012284                                                                                                                                  │ multinode-012284     │ jenkins │ v1.37.0 │ 08 Dec 25 04:31 UTC │                     │
	│ node    │ multinode-012284 node delete m03                                                                                                                          │ multinode-012284     │ jenkins │ v1.37.0 │ 08 Dec 25 04:31 UTC │ 08 Dec 25 04:31 UTC │
	│ stop    │ multinode-012284 stop                                                                                                                                     │ multinode-012284     │ jenkins │ v1.37.0 │ 08 Dec 25 04:31 UTC │ 08 Dec 25 04:34 UTC │
	│ start   │ -p multinode-012284 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-012284     │ jenkins │ v1.37.0 │ 08 Dec 25 04:34 UTC │ 08 Dec 25 04:36 UTC │
	│ node    │ list -p multinode-012284                                                                                                                                  │ multinode-012284     │ jenkins │ v1.37.0 │ 08 Dec 25 04:36 UTC │                     │
	│ start   │ -p multinode-012284-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-012284-m02 │ jenkins │ v1.37.0 │ 08 Dec 25 04:36 UTC │                     │
	│ start   │ -p multinode-012284-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-012284-m03 │ jenkins │ v1.37.0 │ 08 Dec 25 04:36 UTC │ 08 Dec 25 04:37 UTC │
	│ node    │ add -p multinode-012284                                                                                                                                   │ multinode-012284     │ jenkins │ v1.37.0 │ 08 Dec 25 04:37 UTC │                     │
	│ delete  │ -p multinode-012284-m03                                                                                                                                   │ multinode-012284-m03 │ jenkins │ v1.37.0 │ 08 Dec 25 04:37 UTC │ 08 Dec 25 04:37 UTC │
	│ delete  │ -p multinode-012284                                                                                                                                       │ multinode-012284     │ jenkins │ v1.37.0 │ 08 Dec 25 04:37 UTC │ 08 Dec 25 04:37 UTC │
	│ start   │ -p test-preload-308311 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-308311  │ jenkins │ v1.37.0 │ 08 Dec 25 04:37 UTC │ 08 Dec 25 04:38 UTC │
	│ image   │ test-preload-308311 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-308311  │ jenkins │ v1.37.0 │ 08 Dec 25 04:38 UTC │ 08 Dec 25 04:38 UTC │
	│ stop    │ -p test-preload-308311                                                                                                                                    │ test-preload-308311  │ jenkins │ v1.37.0 │ 08 Dec 25 04:38 UTC │ 08 Dec 25 04:39 UTC │
	│ start   │ -p test-preload-308311 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-308311  │ jenkins │ v1.37.0 │ 08 Dec 25 04:39 UTC │ 08 Dec 25 04:39 UTC │
	│ image   │ test-preload-308311 image list                                                                                                                            │ test-preload-308311  │ jenkins │ v1.37.0 │ 08 Dec 25 04:39 UTC │ 08 Dec 25 04:39 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 04:39:05
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 04:39:05.169447  156821 out.go:360] Setting OutFile to fd 1 ...
	I1208 04:39:05.169753  156821 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 04:39:05.169765  156821 out.go:374] Setting ErrFile to fd 2...
	I1208 04:39:05.169769  156821 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 04:39:05.169981  156821 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
	I1208 04:39:05.170445  156821 out.go:368] Setting JSON to false
	I1208 04:39:05.171477  156821 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4889,"bootTime":1765163856,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 04:39:05.171546  156821 start.go:143] virtualization: kvm guest
	I1208 04:39:05.173632  156821 out.go:179] * [test-preload-308311] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1208 04:39:05.174952  156821 notify.go:221] Checking for updates...
	I1208 04:39:05.174985  156821 out.go:179]   - MINIKUBE_LOCATION=21409
	I1208 04:39:05.176161  156821 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 04:39:05.177512  156821 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig
	I1208 04:39:05.178763  156821 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube
	I1208 04:39:05.179825  156821 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 04:39:05.180819  156821 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 04:39:05.182382  156821 config.go:182] Loaded profile config "test-preload-308311": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 04:39:05.182926  156821 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 04:39:05.217947  156821 out.go:179] * Using the kvm2 driver based on existing profile
	I1208 04:39:05.218992  156821 start.go:309] selected driver: kvm2
	I1208 04:39:05.219005  156821 start.go:927] validating driver "kvm2" against &{Name:test-preload-308311 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-308311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 04:39:05.219159  156821 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 04:39:05.220115  156821 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 04:39:05.220157  156821 cni.go:84] Creating CNI manager for ""
	I1208 04:39:05.220234  156821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1208 04:39:05.220305  156821 start.go:353] cluster config:
	{Name:test-preload-308311 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-308311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 04:39:05.220420  156821 iso.go:125] acquiring lock: {Name:mkd550ce23b107beb8be7edee8182e09aac2818e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 04:39:05.222406  156821 out.go:179] * Starting "test-preload-308311" primary control-plane node in "test-preload-308311" cluster
	I1208 04:39:05.223394  156821 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 04:39:05.223420  156821 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21409-125868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1208 04:39:05.223430  156821 cache.go:65] Caching tarball of preloaded images
	I1208 04:39:05.223516  156821 preload.go:238] Found /home/jenkins/minikube-integration/21409-125868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1208 04:39:05.223526  156821 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1208 04:39:05.223611  156821 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/test-preload-308311/config.json ...
	I1208 04:39:05.223794  156821 start.go:360] acquireMachinesLock for test-preload-308311: {Name:mka95432fbbe0b4b61b444ff6ef3750992988c0d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1208 04:39:05.223835  156821 start.go:364] duration metric: took 22.768µs to acquireMachinesLock for "test-preload-308311"
	I1208 04:39:05.223849  156821 start.go:96] Skipping create...Using existing machine configuration
	I1208 04:39:05.223858  156821 fix.go:54] fixHost starting: 
	I1208 04:39:05.225535  156821 fix.go:112] recreateIfNeeded on test-preload-308311: state=Stopped err=<nil>
	W1208 04:39:05.225566  156821 fix.go:138] unexpected machine state, will restart: <nil>
	I1208 04:39:05.227041  156821 out.go:252] * Restarting existing kvm2 VM for "test-preload-308311" ...
	I1208 04:39:05.227069  156821 main.go:143] libmachine: starting domain...
	I1208 04:39:05.227077  156821 main.go:143] libmachine: ensuring networks are active...
	I1208 04:39:05.227853  156821 main.go:143] libmachine: Ensuring network default is active
	I1208 04:39:05.228248  156821 main.go:143] libmachine: Ensuring network mk-test-preload-308311 is active
	I1208 04:39:05.228722  156821 main.go:143] libmachine: getting domain XML...
	I1208 04:39:05.229791  156821 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-308311</name>
	  <uuid>d5d0c01c-4dee-420f-94db-317e89f92d90</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21409-125868/.minikube/machines/test-preload-308311/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21409-125868/.minikube/machines/test-preload-308311/test-preload-308311.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:f9:ca:8b'/>
	      <source network='mk-test-preload-308311'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:b5:35:09'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1208 04:39:06.505039  156821 main.go:143] libmachine: waiting for domain to start...
	I1208 04:39:06.506487  156821 main.go:143] libmachine: domain is now running
	I1208 04:39:06.506517  156821 main.go:143] libmachine: waiting for IP...
	I1208 04:39:06.507335  156821 main.go:143] libmachine: domain test-preload-308311 has defined MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:06.507893  156821 main.go:143] libmachine: domain test-preload-308311 has current primary IP address 192.168.39.42 and MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:06.507928  156821 main.go:143] libmachine: found domain IP: 192.168.39.42
	I1208 04:39:06.507936  156821 main.go:143] libmachine: reserving static IP address...
	I1208 04:39:06.508392  156821 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-308311", mac: "52:54:00:f9:ca:8b", ip: "192.168.39.42"} in network mk-test-preload-308311: {Iface:virbr1 ExpiryTime:2025-12-08 05:37:34 +0000 UTC Type:0 Mac:52:54:00:f9:ca:8b Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-308311 Clientid:01:52:54:00:f9:ca:8b}
	I1208 04:39:06.508422  156821 main.go:143] libmachine: skip adding static IP to network mk-test-preload-308311 - found existing host DHCP lease matching {name: "test-preload-308311", mac: "52:54:00:f9:ca:8b", ip: "192.168.39.42"}
	I1208 04:39:06.508430  156821 main.go:143] libmachine: reserved static IP address 192.168.39.42 for domain test-preload-308311
	I1208 04:39:06.508436  156821 main.go:143] libmachine: waiting for SSH...
	I1208 04:39:06.508443  156821 main.go:143] libmachine: Getting to WaitForSSH function...
	I1208 04:39:06.510708  156821 main.go:143] libmachine: domain test-preload-308311 has defined MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:06.511053  156821 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:ca:8b", ip: ""} in network mk-test-preload-308311: {Iface:virbr1 ExpiryTime:2025-12-08 05:37:34 +0000 UTC Type:0 Mac:52:54:00:f9:ca:8b Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-308311 Clientid:01:52:54:00:f9:ca:8b}
	I1208 04:39:06.511073  156821 main.go:143] libmachine: domain test-preload-308311 has defined IP address 192.168.39.42 and MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:06.511240  156821 main.go:143] libmachine: Using SSH client type: native
	I1208 04:39:06.511521  156821 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1208 04:39:06.511533  156821 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1208 04:39:09.607182  156821 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.42:22: connect: no route to host
	I1208 04:39:15.687226  156821 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.42:22: connect: no route to host
	I1208 04:39:18.806092  156821 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 04:39:18.809878  156821 main.go:143] libmachine: domain test-preload-308311 has defined MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:18.810365  156821 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:ca:8b", ip: ""} in network mk-test-preload-308311: {Iface:virbr1 ExpiryTime:2025-12-08 05:39:16 +0000 UTC Type:0 Mac:52:54:00:f9:ca:8b Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-308311 Clientid:01:52:54:00:f9:ca:8b}
	I1208 04:39:18.810405  156821 main.go:143] libmachine: domain test-preload-308311 has defined IP address 192.168.39.42 and MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:18.810648  156821 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/test-preload-308311/config.json ...
	I1208 04:39:18.810871  156821 machine.go:94] provisionDockerMachine start ...
	I1208 04:39:18.813136  156821 main.go:143] libmachine: domain test-preload-308311 has defined MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:18.813470  156821 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:ca:8b", ip: ""} in network mk-test-preload-308311: {Iface:virbr1 ExpiryTime:2025-12-08 05:39:16 +0000 UTC Type:0 Mac:52:54:00:f9:ca:8b Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-308311 Clientid:01:52:54:00:f9:ca:8b}
	I1208 04:39:18.813497  156821 main.go:143] libmachine: domain test-preload-308311 has defined IP address 192.168.39.42 and MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:18.813650  156821 main.go:143] libmachine: Using SSH client type: native
	I1208 04:39:18.813886  156821 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1208 04:39:18.813919  156821 main.go:143] libmachine: About to run SSH command:
	hostname
	I1208 04:39:18.927709  156821 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1208 04:39:18.927743  156821 buildroot.go:166] provisioning hostname "test-preload-308311"
	I1208 04:39:18.930934  156821 main.go:143] libmachine: domain test-preload-308311 has defined MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:18.931318  156821 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:ca:8b", ip: ""} in network mk-test-preload-308311: {Iface:virbr1 ExpiryTime:2025-12-08 05:39:16 +0000 UTC Type:0 Mac:52:54:00:f9:ca:8b Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-308311 Clientid:01:52:54:00:f9:ca:8b}
	I1208 04:39:18.931342  156821 main.go:143] libmachine: domain test-preload-308311 has defined IP address 192.168.39.42 and MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:18.931487  156821 main.go:143] libmachine: Using SSH client type: native
	I1208 04:39:18.931715  156821 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1208 04:39:18.931736  156821 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-308311 && echo "test-preload-308311" | sudo tee /etc/hostname
	I1208 04:39:19.061341  156821 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-308311
	
	I1208 04:39:19.064297  156821 main.go:143] libmachine: domain test-preload-308311 has defined MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:19.064725  156821 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:ca:8b", ip: ""} in network mk-test-preload-308311: {Iface:virbr1 ExpiryTime:2025-12-08 05:39:16 +0000 UTC Type:0 Mac:52:54:00:f9:ca:8b Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-308311 Clientid:01:52:54:00:f9:ca:8b}
	I1208 04:39:19.064770  156821 main.go:143] libmachine: domain test-preload-308311 has defined IP address 192.168.39.42 and MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:19.064962  156821 main.go:143] libmachine: Using SSH client type: native
	I1208 04:39:19.065174  156821 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1208 04:39:19.065189  156821 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-308311' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-308311/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-308311' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1208 04:39:19.188805  156821 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1208 04:39:19.188841  156821 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21409-125868/.minikube CaCertPath:/home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21409-125868/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21409-125868/.minikube}
	I1208 04:39:19.188913  156821 buildroot.go:174] setting up certificates
	I1208 04:39:19.188926  156821 provision.go:84] configureAuth start
	I1208 04:39:19.191930  156821 main.go:143] libmachine: domain test-preload-308311 has defined MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:19.192344  156821 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:ca:8b", ip: ""} in network mk-test-preload-308311: {Iface:virbr1 ExpiryTime:2025-12-08 05:39:16 +0000 UTC Type:0 Mac:52:54:00:f9:ca:8b Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-308311 Clientid:01:52:54:00:f9:ca:8b}
	I1208 04:39:19.192368  156821 main.go:143] libmachine: domain test-preload-308311 has defined IP address 192.168.39.42 and MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:19.194683  156821 main.go:143] libmachine: domain test-preload-308311 has defined MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:19.195026  156821 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:ca:8b", ip: ""} in network mk-test-preload-308311: {Iface:virbr1 ExpiryTime:2025-12-08 05:39:16 +0000 UTC Type:0 Mac:52:54:00:f9:ca:8b Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-308311 Clientid:01:52:54:00:f9:ca:8b}
	I1208 04:39:19.195056  156821 main.go:143] libmachine: domain test-preload-308311 has defined IP address 192.168.39.42 and MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:19.195169  156821 provision.go:143] copyHostCerts
	I1208 04:39:19.195220  156821 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-125868/.minikube/key.pem, removing ...
	I1208 04:39:19.195229  156821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-125868/.minikube/key.pem
	I1208 04:39:19.195309  156821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21409-125868/.minikube/key.pem (1675 bytes)
	I1208 04:39:19.195412  156821 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-125868/.minikube/ca.pem, removing ...
	I1208 04:39:19.195421  156821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-125868/.minikube/ca.pem
	I1208 04:39:19.195450  156821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21409-125868/.minikube/ca.pem (1078 bytes)
	I1208 04:39:19.195527  156821 exec_runner.go:144] found /home/jenkins/minikube-integration/21409-125868/.minikube/cert.pem, removing ...
	I1208 04:39:19.195534  156821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21409-125868/.minikube/cert.pem
	I1208 04:39:19.195558  156821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21409-125868/.minikube/cert.pem (1123 bytes)
	I1208 04:39:19.195620  156821 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21409-125868/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca-key.pem org=jenkins.test-preload-308311 san=[127.0.0.1 192.168.39.42 localhost minikube test-preload-308311]
	I1208 04:39:19.297255  156821 provision.go:177] copyRemoteCerts
	I1208 04:39:19.297314  156821 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1208 04:39:19.299946  156821 main.go:143] libmachine: domain test-preload-308311 has defined MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:19.300329  156821 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:ca:8b", ip: ""} in network mk-test-preload-308311: {Iface:virbr1 ExpiryTime:2025-12-08 05:39:16 +0000 UTC Type:0 Mac:52:54:00:f9:ca:8b Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-308311 Clientid:01:52:54:00:f9:ca:8b}
	I1208 04:39:19.300356  156821 main.go:143] libmachine: domain test-preload-308311 has defined IP address 192.168.39.42 and MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:19.300504  156821 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/test-preload-308311/id_rsa Username:docker}
	I1208 04:39:19.387621  156821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1208 04:39:19.415397  156821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1208 04:39:19.442452  156821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1208 04:39:19.469001  156821 provision.go:87] duration metric: took 280.055731ms to configureAuth
	I1208 04:39:19.469038  156821 buildroot.go:189] setting minikube options for container-runtime
	I1208 04:39:19.469256  156821 config.go:182] Loaded profile config "test-preload-308311": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 04:39:19.472006  156821 main.go:143] libmachine: domain test-preload-308311 has defined MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:19.472343  156821 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:ca:8b", ip: ""} in network mk-test-preload-308311: {Iface:virbr1 ExpiryTime:2025-12-08 05:39:16 +0000 UTC Type:0 Mac:52:54:00:f9:ca:8b Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-308311 Clientid:01:52:54:00:f9:ca:8b}
	I1208 04:39:19.472371  156821 main.go:143] libmachine: domain test-preload-308311 has defined IP address 192.168.39.42 and MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:19.472539  156821 main.go:143] libmachine: Using SSH client type: native
	I1208 04:39:19.472756  156821 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1208 04:39:19.472777  156821 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1208 04:39:19.716928  156821 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1208 04:39:19.716957  156821 machine.go:97] duration metric: took 906.070864ms to provisionDockerMachine
	I1208 04:39:19.716970  156821 start.go:293] postStartSetup for "test-preload-308311" (driver="kvm2")
	I1208 04:39:19.716980  156821 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1208 04:39:19.717036  156821 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1208 04:39:19.720018  156821 main.go:143] libmachine: domain test-preload-308311 has defined MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:19.720451  156821 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:ca:8b", ip: ""} in network mk-test-preload-308311: {Iface:virbr1 ExpiryTime:2025-12-08 05:39:16 +0000 UTC Type:0 Mac:52:54:00:f9:ca:8b Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-308311 Clientid:01:52:54:00:f9:ca:8b}
	I1208 04:39:19.720478  156821 main.go:143] libmachine: domain test-preload-308311 has defined IP address 192.168.39.42 and MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:19.720641  156821 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/test-preload-308311/id_rsa Username:docker}
	I1208 04:39:19.809318  156821 ssh_runner.go:195] Run: cat /etc/os-release
	I1208 04:39:19.814010  156821 info.go:137] Remote host: Buildroot 2025.02
	I1208 04:39:19.814038  156821 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-125868/.minikube/addons for local assets ...
	I1208 04:39:19.814125  156821 filesync.go:126] Scanning /home/jenkins/minikube-integration/21409-125868/.minikube/files for local assets ...
	I1208 04:39:19.814226  156821 filesync.go:149] local asset: /home/jenkins/minikube-integration/21409-125868/.minikube/files/etc/ssl/certs/1299002.pem -> 1299002.pem in /etc/ssl/certs
	I1208 04:39:19.814351  156821 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1208 04:39:19.829795  156821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/files/etc/ssl/certs/1299002.pem --> /etc/ssl/certs/1299002.pem (1708 bytes)
	I1208 04:39:19.857891  156821 start.go:296] duration metric: took 140.906037ms for postStartSetup
	I1208 04:39:19.857952  156821 fix.go:56] duration metric: took 14.634092601s for fixHost
	I1208 04:39:19.860834  156821 main.go:143] libmachine: domain test-preload-308311 has defined MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:19.861294  156821 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:ca:8b", ip: ""} in network mk-test-preload-308311: {Iface:virbr1 ExpiryTime:2025-12-08 05:39:16 +0000 UTC Type:0 Mac:52:54:00:f9:ca:8b Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-308311 Clientid:01:52:54:00:f9:ca:8b}
	I1208 04:39:19.861323  156821 main.go:143] libmachine: domain test-preload-308311 has defined IP address 192.168.39.42 and MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:19.861517  156821 main.go:143] libmachine: Using SSH client type: native
	I1208 04:39:19.861798  156821 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1208 04:39:19.861812  156821 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1208 04:39:19.974881  156821 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765168759.931322786
	
	I1208 04:39:19.974926  156821 fix.go:216] guest clock: 1765168759.931322786
	I1208 04:39:19.974939  156821 fix.go:229] Guest: 2025-12-08 04:39:19.931322786 +0000 UTC Remote: 2025-12-08 04:39:19.857956635 +0000 UTC m=+14.738433982 (delta=73.366151ms)
	I1208 04:39:19.974960  156821 fix.go:200] guest clock delta is within tolerance: 73.366151ms
	I1208 04:39:19.974965  156821 start.go:83] releasing machines lock for "test-preload-308311", held for 14.751122548s
	I1208 04:39:19.977684  156821 main.go:143] libmachine: domain test-preload-308311 has defined MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:19.978167  156821 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:ca:8b", ip: ""} in network mk-test-preload-308311: {Iface:virbr1 ExpiryTime:2025-12-08 05:39:16 +0000 UTC Type:0 Mac:52:54:00:f9:ca:8b Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-308311 Clientid:01:52:54:00:f9:ca:8b}
	I1208 04:39:19.978198  156821 main.go:143] libmachine: domain test-preload-308311 has defined IP address 192.168.39.42 and MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:19.978706  156821 ssh_runner.go:195] Run: cat /version.json
	I1208 04:39:19.978780  156821 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1208 04:39:19.981839  156821 main.go:143] libmachine: domain test-preload-308311 has defined MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:19.981979  156821 main.go:143] libmachine: domain test-preload-308311 has defined MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:19.982231  156821 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:ca:8b", ip: ""} in network mk-test-preload-308311: {Iface:virbr1 ExpiryTime:2025-12-08 05:39:16 +0000 UTC Type:0 Mac:52:54:00:f9:ca:8b Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-308311 Clientid:01:52:54:00:f9:ca:8b}
	I1208 04:39:19.982260  156821 main.go:143] libmachine: domain test-preload-308311 has defined IP address 192.168.39.42 and MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:19.982380  156821 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:ca:8b", ip: ""} in network mk-test-preload-308311: {Iface:virbr1 ExpiryTime:2025-12-08 05:39:16 +0000 UTC Type:0 Mac:52:54:00:f9:ca:8b Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-308311 Clientid:01:52:54:00:f9:ca:8b}
	I1208 04:39:19.982408  156821 main.go:143] libmachine: domain test-preload-308311 has defined IP address 192.168.39.42 and MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:19.982436  156821 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/test-preload-308311/id_rsa Username:docker}
	I1208 04:39:19.982625  156821 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/test-preload-308311/id_rsa Username:docker}
	I1208 04:39:20.064062  156821 ssh_runner.go:195] Run: systemctl --version
	I1208 04:39:20.088884  156821 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1208 04:39:20.230556  156821 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1208 04:39:20.237184  156821 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1208 04:39:20.237251  156821 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1208 04:39:20.255675  156821 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1208 04:39:20.255697  156821 start.go:496] detecting cgroup driver to use...
	I1208 04:39:20.255763  156821 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1208 04:39:20.273963  156821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1208 04:39:20.288937  156821 docker.go:218] disabling cri-docker service (if available) ...
	I1208 04:39:20.289001  156821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1208 04:39:20.305048  156821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1208 04:39:20.319631  156821 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1208 04:39:20.455464  156821 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1208 04:39:20.663785  156821 docker.go:234] disabling docker service ...
	I1208 04:39:20.663860  156821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1208 04:39:20.680121  156821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1208 04:39:20.694028  156821 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1208 04:39:20.837790  156821 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1208 04:39:20.974568  156821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
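The teardown above follows a fixed stop → disable → mask order for the docker units. A minimal Go sketch of that sequence (illustrative only; minikube itself issues these commands over SSH via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %v (%s)", args, err, out)
	}
	return nil
}

func main() {
	// Mirrors the order in the log: stop the socket and service first,
	// then disable the socket, then mask the service so nothing restarts it.
	steps := [][]string{
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			fmt.Println("warn:", err) // tolerate a missing unit; the flow continues regardless
		}
	}
}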
	I1208 04:39:20.990501  156821 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1208 04:39:21.013049  156821 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1208 04:39:21.013150  156821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 04:39:21.025543  156821 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1208 04:39:21.025611  156821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 04:39:21.038175  156821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 04:39:21.050762  156821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 04:39:21.063558  156821 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1208 04:39:21.076788  156821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 04:39:21.090178  156821 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1208 04:39:21.111039  156821 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
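Taken together, the tee and sed edits above leave the crictl config and the cri-o drop-in looking roughly like this (reconstructed from the commands shown; not captured from the VM):

# /etc/crictl.yaml
runtime-endpoint: unix:///var/run/crio/crio.sock

# /etc/crio/crio.conf.d/02-crio.conf (relevant keys)
pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]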
	I1208 04:39:21.123992  156821 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1208 04:39:21.134726  156821 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1208 04:39:21.134816  156821 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1208 04:39:21.155252  156821 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1208 04:39:21.167702  156821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 04:39:21.305505  156821 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1208 04:39:21.414180  156821 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1208 04:39:21.414263  156821 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1208 04:39:21.419469  156821 start.go:564] Will wait 60s for crictl version
	I1208 04:39:21.419540  156821 ssh_runner.go:195] Run: which crictl
	I1208 04:39:21.423411  156821 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1208 04:39:21.454660  156821 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1208 04:39:21.454742  156821 ssh_runner.go:195] Run: crio --version
	I1208 04:39:21.483597  156821 ssh_runner.go:195] Run: crio --version
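The "Will wait 60s for socket path" step above is a simple poll-until-deadline loop. A stdlib-only Go sketch of that pattern (illustrative; minikube performs the stat over SSH):

package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file exists; cri-o is (probably) up
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}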
	I1208 04:39:21.513793  156821 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1208 04:39:21.517478  156821 main.go:143] libmachine: domain test-preload-308311 has defined MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:21.517871  156821 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:ca:8b", ip: ""} in network mk-test-preload-308311: {Iface:virbr1 ExpiryTime:2025-12-08 05:39:16 +0000 UTC Type:0 Mac:52:54:00:f9:ca:8b Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-308311 Clientid:01:52:54:00:f9:ca:8b}
	I1208 04:39:21.517918  156821 main.go:143] libmachine: domain test-preload-308311 has defined IP address 192.168.39.42 and MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:21.518082  156821 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1208 04:39:21.522619  156821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 04:39:21.537626  156821 kubeadm.go:884] updating cluster {Name:test-preload-308311 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-308311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1208 04:39:21.537745  156821 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 04:39:21.537791  156821 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 04:39:21.572539  156821 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1208 04:39:21.572622  156821 ssh_runner.go:195] Run: which lz4
	I1208 04:39:21.577043  156821 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1208 04:39:21.581656  156821 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1208 04:39:21.581705  156821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1208 04:39:22.774243  156821 crio.go:462] duration metric: took 1.19723595s to copy over tarball
	I1208 04:39:22.774324  156821 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1208 04:39:24.213533  156821 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.439177364s)
	I1208 04:39:24.213559  156821 crio.go:469] duration metric: took 1.439283355s to extract the tarball
	I1208 04:39:24.213566  156821 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1208 04:39:24.249371  156821 ssh_runner.go:195] Run: sudo crictl images --output json
	I1208 04:39:24.287037  156821 crio.go:514] all images are preloaded for cri-o runtime.
	I1208 04:39:24.287074  156821 cache_images.go:86] Images are preloaded, skipping loading
	I1208 04:39:24.287082  156821 kubeadm.go:935] updating node { 192.168.39.42 8443 v1.34.2 crio true true} ...
	I1208 04:39:24.287190  156821 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-308311 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.42
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:test-preload-308311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1208 04:39:24.287274  156821 ssh_runner.go:195] Run: crio config
	I1208 04:39:24.331685  156821 cni.go:84] Creating CNI manager for ""
	I1208 04:39:24.331715  156821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1208 04:39:24.331738  156821 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1208 04:39:24.331761  156821 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.42 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-308311 NodeName:test-preload-308311 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.42"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.42 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1208 04:39:24.331928  156821 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.42
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-308311"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.42"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.42"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1208 04:39:24.332018  156821 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1208 04:39:24.343677  156821 binaries.go:51] Found k8s binaries, skipping transfer
	I1208 04:39:24.343739  156821 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1208 04:39:24.354819  156821 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1208 04:39:24.373688  156821 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1208 04:39:24.392246  156821 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
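The kubeadm.yaml.new written here is the four-document manifest printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A stdlib-only Go sketch that lists each document's kind, handy for eyeballing such a file:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	// kubeadm's multi-document YAML separates documents with bare "---" lines.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}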
	I1208 04:39:24.411403  156821 ssh_runner.go:195] Run: grep 192.168.39.42	control-plane.minikube.internal$ /etc/hosts
	I1208 04:39:24.415247  156821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.42	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1208 04:39:24.428678  156821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 04:39:24.563628  156821 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 04:39:24.582980  156821 certs.go:69] Setting up /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/test-preload-308311 for IP: 192.168.39.42
	I1208 04:39:24.583003  156821 certs.go:195] generating shared ca certs ...
	I1208 04:39:24.583020  156821 certs.go:227] acquiring lock for ca certs: {Name:mkde290f016452b47757f4047e34e65b6d895da1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 04:39:24.583177  156821 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21409-125868/.minikube/ca.key
	I1208 04:39:24.583215  156821 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21409-125868/.minikube/proxy-client-ca.key
	I1208 04:39:24.583226  156821 certs.go:257] generating profile certs ...
	I1208 04:39:24.583309  156821 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/test-preload-308311/client.key
	I1208 04:39:24.583370  156821 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/test-preload-308311/apiserver.key.9231959c
	I1208 04:39:24.583408  156821 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/test-preload-308311/proxy-client.key
	I1208 04:39:24.583529  156821 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/129900.pem (1338 bytes)
	W1208 04:39:24.583560  156821 certs.go:480] ignoring /home/jenkins/minikube-integration/21409-125868/.minikube/certs/129900_empty.pem, impossibly tiny 0 bytes
	I1208 04:39:24.583575  156821 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca-key.pem (1675 bytes)
	I1208 04:39:24.583607  156821 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/ca.pem (1078 bytes)
	I1208 04:39:24.583630  156821 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/cert.pem (1123 bytes)
	I1208 04:39:24.583652  156821 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-125868/.minikube/certs/key.pem (1675 bytes)
	I1208 04:39:24.583701  156821 certs.go:484] found cert: /home/jenkins/minikube-integration/21409-125868/.minikube/files/etc/ssl/certs/1299002.pem (1708 bytes)
	I1208 04:39:24.584394  156821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1208 04:39:24.622561  156821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1208 04:39:24.663850  156821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1208 04:39:24.691630  156821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1208 04:39:24.718641  156821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/test-preload-308311/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1208 04:39:24.745770  156821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/test-preload-308311/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1208 04:39:24.772667  156821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/test-preload-308311/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1208 04:39:24.799941  156821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/test-preload-308311/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1208 04:39:24.826773  156821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/files/etc/ssl/certs/1299002.pem --> /usr/share/ca-certificates/1299002.pem (1708 bytes)
	I1208 04:39:24.853734  156821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1208 04:39:24.880508  156821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21409-125868/.minikube/certs/129900.pem --> /usr/share/ca-certificates/129900.pem (1338 bytes)
	I1208 04:39:24.907477  156821 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1208 04:39:24.926159  156821 ssh_runner.go:195] Run: openssl version
	I1208 04:39:24.932060  156821 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/1299002.pem
	I1208 04:39:24.942361  156821 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/1299002.pem /etc/ssl/certs/1299002.pem
	I1208 04:39:24.952837  156821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1299002.pem
	I1208 04:39:24.957673  156821 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  8 03:50 /usr/share/ca-certificates/1299002.pem
	I1208 04:39:24.957749  156821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1299002.pem
	I1208 04:39:24.964613  156821 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1208 04:39:24.975026  156821 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/1299002.pem /etc/ssl/certs/3ec20f2e.0
	I1208 04:39:24.985345  156821 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1208 04:39:24.995736  156821 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1208 04:39:25.006295  156821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1208 04:39:25.011127  156821 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  8 03:40 /usr/share/ca-certificates/minikubeCA.pem
	I1208 04:39:25.011172  156821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1208 04:39:25.017958  156821 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1208 04:39:25.028599  156821 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1208 04:39:25.039238  156821 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/129900.pem
	I1208 04:39:25.049768  156821 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/129900.pem /etc/ssl/certs/129900.pem
	I1208 04:39:25.060394  156821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129900.pem
	I1208 04:39:25.066497  156821 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  8 03:50 /usr/share/ca-certificates/129900.pem
	I1208 04:39:25.066566  156821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129900.pem
	I1208 04:39:25.073509  156821 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1208 04:39:25.084641  156821 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/129900.pem /etc/ssl/certs/51391683.0
	I1208 04:39:25.095766  156821 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1208 04:39:25.100615  156821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1208 04:39:25.107558  156821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1208 04:39:25.114545  156821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1208 04:39:25.121516  156821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1208 04:39:25.128430  156821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1208 04:39:25.135595  156821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
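Each `openssl x509 -checkend 86400` probe above asks whether a certificate expires within the next 24 hours. An equivalent stdlib Go sketch (not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when NotAfter falls before now+d, i.e. the cert expires within d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err) // true would call for regenerating the cert
}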
	I1208 04:39:25.142425  156821 kubeadm.go:401] StartCluster: {Name:test-preload-308311 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-308311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 04:39:25.142510  156821 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1208 04:39:25.142562  156821 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 04:39:25.175008  156821 cri.go:89] found id: ""
	I1208 04:39:25.175096  156821 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1208 04:39:25.187495  156821 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1208 04:39:25.187526  156821 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1208 04:39:25.187608  156821 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1208 04:39:25.199368  156821 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1208 04:39:25.199790  156821 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-308311" does not appear in /home/jenkins/minikube-integration/21409-125868/kubeconfig
	I1208 04:39:25.199890  156821 kubeconfig.go:62] /home/jenkins/minikube-integration/21409-125868/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-308311" cluster setting kubeconfig missing "test-preload-308311" context setting]
	I1208 04:39:25.200260  156821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/kubeconfig: {Name:mk83f735c71f0681683d120e6684a264c50ab0a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 04:39:25.200778  156821 kapi.go:59] client config for test-preload-308311: &rest.Config{Host:"https://192.168.39.42:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-125868/.minikube/profiles/test-preload-308311/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-125868/.minikube/profiles/test-preload-308311/client.key", CAFile:"/home/jenkins/minikube-integration/21409-125868/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 04:39:25.201272  156821 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1208 04:39:25.201289  156821 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1208 04:39:25.201293  156821 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1208 04:39:25.201297  156821 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1208 04:39:25.201301  156821 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1208 04:39:25.201677  156821 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1208 04:39:25.213078  156821 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.42
	I1208 04:39:25.213119  156821 kubeadm.go:1161] stopping kube-system containers ...
	I1208 04:39:25.213135  156821 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1208 04:39:25.213197  156821 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1208 04:39:25.245247  156821 cri.go:89] found id: ""
	I1208 04:39:25.245342  156821 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1208 04:39:25.263888  156821 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1208 04:39:25.275687  156821 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1208 04:39:25.275720  156821 kubeadm.go:158] found existing configuration files:
	
	I1208 04:39:25.275786  156821 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1208 04:39:25.286481  156821 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1208 04:39:25.286561  156821 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1208 04:39:25.297886  156821 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1208 04:39:25.308693  156821 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1208 04:39:25.308767  156821 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1208 04:39:25.319949  156821 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1208 04:39:25.330681  156821 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1208 04:39:25.330743  156821 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1208 04:39:25.341882  156821 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1208 04:39:25.352055  156821 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1208 04:39:25.352120  156821 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
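The cleanup loop above applies the same rule to each control-plane conf file: keep it only if it references the expected endpoint, otherwise remove it so kubeadm regenerates it. A minimal Go sketch of that rule (illustrative; minikube runs grep/rm over SSH instead):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: delete so kubeadm recreates it.
			os.Remove(conf)
			fmt.Println("removed (or absent):", conf)
		}
	}
}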
	I1208 04:39:25.363128  156821 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1208 04:39:25.374061  156821 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 04:39:25.423077  156821 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 04:39:26.660096  156821 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.236959497s)
	I1208 04:39:26.660199  156821 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1208 04:39:26.901065  156821 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 04:39:26.972373  156821 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1208 04:39:27.034276  156821 api_server.go:52] waiting for apiserver process to appear ...
	I1208 04:39:27.034355  156821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 04:39:27.534711  156821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 04:39:28.034480  156821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 04:39:28.535476  156821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 04:39:29.035040  156821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 04:39:29.093550  156821 api_server.go:72] duration metric: took 2.059280026s to wait for apiserver process to appear ...
	I1208 04:39:29.093587  156821 api_server.go:88] waiting for apiserver healthz status ...
	I1208 04:39:29.093611  156821 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1208 04:39:29.094094  156821 api_server.go:269] stopped: https://192.168.39.42:8443/healthz: Get "https://192.168.39.42:8443/healthz": dial tcp 192.168.39.42:8443: connect: connection refused
	I1208 04:39:29.593763  156821 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1208 04:39:31.374918  156821 api_server.go:279] https://192.168.39.42:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1208 04:39:31.374948  156821 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1208 04:39:31.374966  156821 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1208 04:39:31.487353  156821 api_server.go:279] https://192.168.39.42:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1208 04:39:31.487385  156821 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1208 04:39:31.594702  156821 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1208 04:39:31.604690  156821 api_server.go:279] https://192.168.39.42:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1208 04:39:31.604717  156821 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1208 04:39:32.094484  156821 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1208 04:39:32.098974  156821 api_server.go:279] https://192.168.39.42:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1208 04:39:32.098998  156821 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1208 04:39:32.594710  156821 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1208 04:39:32.601029  156821 api_server.go:279] https://192.168.39.42:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1208 04:39:32.601053  156821 api_server.go:103] status: https://192.168.39.42:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1208 04:39:33.093746  156821 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1208 04:39:33.098332  156821 api_server.go:279] https://192.168.39.42:8443/healthz returned 200:
	ok
	I1208 04:39:33.104811  156821 api_server.go:141] control plane version: v1.34.2
	I1208 04:39:33.104836  156821 api_server.go:131] duration metric: took 4.011241591s to wait for apiserver health ...
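The healthz wait that just completed treats everything except a 200 with body "ok" (the early connection refused, the 403 while system:anonymous lacks RBAC, the 500s while post-start hooks settle) as "not ready yet" and retries. A self-contained Go sketch of such a poll loop; TLS verification is skipped here purely to keep the sketch standalone, whereas a real client would trust the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// Only a 200 whose body is exactly "ok" counts as healthy.
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.39.42:8443/healthz", 4*time.Minute))
}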
	I1208 04:39:33.104845  156821 cni.go:84] Creating CNI manager for ""
	I1208 04:39:33.104850  156821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1208 04:39:33.106826  156821 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1208 04:39:33.107841  156821 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1208 04:39:33.129419  156821 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
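The 496-byte 1-k8s.conflist written here is not reproduced in the log. A generic bridge-plugin conflist of the kind this step installs looks roughly like the following (illustrative shape only, using the pod CIDR from the log; not the actual file contents):

{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}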
	I1208 04:39:33.162776  156821 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 04:39:33.167940  156821 system_pods.go:59] 7 kube-system pods found
	I1208 04:39:33.168000  156821 system_pods.go:61] "coredns-66bc5c9577-6nhcj" [eec02b86-9149-4d99-be2f-6fa44a44f412] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 04:39:33.168023  156821 system_pods.go:61] "etcd-test-preload-308311" [64ea6f5b-b99b-4268-b1ae-38b5ae8af6b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1208 04:39:33.168043  156821 system_pods.go:61] "kube-apiserver-test-preload-308311" [7e514cd7-3155-4008-8f41-a27c426237bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1208 04:39:33.168061  156821 system_pods.go:61] "kube-controller-manager-test-preload-308311" [882f73ee-2e7f-4d01-bb69-e83149494e53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1208 04:39:33.168076  156821 system_pods.go:61] "kube-proxy-24v6n" [c8b3d4ff-ec0a-42da-a3ae-c60be2370f51] Running
	I1208 04:39:33.168087  156821 system_pods.go:61] "kube-scheduler-test-preload-308311" [e303b745-9995-4cdc-8ab3-ffe40376a239] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1208 04:39:33.168105  156821 system_pods.go:61] "storage-provisioner" [e69ed018-d93d-435d-9366-82c6745f7192] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1208 04:39:33.168116  156821 system_pods.go:74] duration metric: took 5.31248ms to wait for pod list to return data ...
	I1208 04:39:33.168131  156821 node_conditions.go:102] verifying NodePressure condition ...
	I1208 04:39:33.178064  156821 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1208 04:39:33.178091  156821 node_conditions.go:123] node cpu capacity is 2
	I1208 04:39:33.178104  156821 node_conditions.go:105] duration metric: took 9.968359ms to run NodePressure ...
	I1208 04:39:33.178153  156821 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1208 04:39:33.432409  156821 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1208 04:39:33.436804  156821 kubeadm.go:744] kubelet initialised
	I1208 04:39:33.436825  156821 kubeadm.go:745] duration metric: took 4.38861ms waiting for restarted kubelet to initialise ...
	I1208 04:39:33.436846  156821 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1208 04:39:33.454923  156821 ops.go:34] apiserver oom_adj: -16
	I1208 04:39:33.454943  156821 kubeadm.go:602] duration metric: took 8.267409934s to restartPrimaryControlPlane
	I1208 04:39:33.454951  156821 kubeadm.go:403] duration metric: took 8.312535984s to StartCluster
	I1208 04:39:33.454970  156821 settings.go:142] acquiring lock: {Name:mk8cd1e38ee853efa0b11d6abb3aeb99916975f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 04:39:33.455048  156821 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21409-125868/kubeconfig
	I1208 04:39:33.455639  156821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21409-125868/kubeconfig: {Name:mk83f735c71f0681683d120e6684a264c50ab0a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1208 04:39:33.455927  156821 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1208 04:39:33.456065  156821 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1208 04:39:33.456125  156821 config.go:182] Loaded profile config "test-preload-308311": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 04:39:33.456165  156821 addons.go:70] Setting storage-provisioner=true in profile "test-preload-308311"
	I1208 04:39:33.456189  156821 addons.go:239] Setting addon storage-provisioner=true in "test-preload-308311"
	W1208 04:39:33.456201  156821 addons.go:248] addon storage-provisioner should already be in state true
	I1208 04:39:33.456205  156821 addons.go:70] Setting default-storageclass=true in profile "test-preload-308311"
	I1208 04:39:33.456244  156821 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-308311"
	I1208 04:39:33.456259  156821 host.go:66] Checking if "test-preload-308311" exists ...
	I1208 04:39:33.457353  156821 out.go:179] * Verifying Kubernetes components...
	I1208 04:39:33.458351  156821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1208 04:39:33.458797  156821 kapi.go:59] client config for test-preload-308311: &rest.Config{Host:"https://192.168.39.42:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-125868/.minikube/profiles/test-preload-308311/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-125868/.minikube/profiles/test-preload-308311/client.key", CAFile:"/home/jenkins/minikube-integration/21409-125868/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 04:39:33.459082  156821 addons.go:239] Setting addon default-storageclass=true in "test-preload-308311"
	W1208 04:39:33.459095  156821 addons.go:248] addon default-storageclass should already be in state true
	I1208 04:39:33.459111  156821 host.go:66] Checking if "test-preload-308311" exists ...
	I1208 04:39:33.459378  156821 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1208 04:39:33.460499  156821 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 04:39:33.460520  156821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1208 04:39:33.460560  156821 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1208 04:39:33.460573  156821 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1208 04:39:33.463490  156821 main.go:143] libmachine: domain test-preload-308311 has defined MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:33.463619  156821 main.go:143] libmachine: domain test-preload-308311 has defined MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:33.463984  156821 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:ca:8b", ip: ""} in network mk-test-preload-308311: {Iface:virbr1 ExpiryTime:2025-12-08 05:39:16 +0000 UTC Type:0 Mac:52:54:00:f9:ca:8b Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-308311 Clientid:01:52:54:00:f9:ca:8b}
	I1208 04:39:33.464029  156821 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:f9:ca:8b", ip: ""} in network mk-test-preload-308311: {Iface:virbr1 ExpiryTime:2025-12-08 05:39:16 +0000 UTC Type:0 Mac:52:54:00:f9:ca:8b Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:test-preload-308311 Clientid:01:52:54:00:f9:ca:8b}
	I1208 04:39:33.464049  156821 main.go:143] libmachine: domain test-preload-308311 has defined IP address 192.168.39.42 and MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:33.464092  156821 main.go:143] libmachine: domain test-preload-308311 has defined IP address 192.168.39.42 and MAC address 52:54:00:f9:ca:8b in network mk-test-preload-308311
	I1208 04:39:33.464280  156821 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/test-preload-308311/id_rsa Username:docker}
	I1208 04:39:33.464284  156821 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/test-preload-308311/id_rsa Username:docker}
	I1208 04:39:33.694636  156821 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1208 04:39:33.715512  156821 node_ready.go:35] waiting up to 6m0s for node "test-preload-308311" to be "Ready" ...
	I1208 04:39:33.719380  156821 node_ready.go:49] node "test-preload-308311" is "Ready"
	I1208 04:39:33.719403  156821 node_ready.go:38] duration metric: took 3.842119ms for node "test-preload-308311" to be "Ready" ...
	I1208 04:39:33.719415  156821 api_server.go:52] waiting for apiserver process to appear ...
	I1208 04:39:33.719461  156821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 04:39:33.737844  156821 api_server.go:72] duration metric: took 281.87537ms to wait for apiserver process to appear ...
	I1208 04:39:33.737867  156821 api_server.go:88] waiting for apiserver healthz status ...
	I1208 04:39:33.737883  156821 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1208 04:39:33.744605  156821 api_server.go:279] https://192.168.39.42:8443/healthz returned 200:
	ok
	I1208 04:39:33.745479  156821 api_server.go:141] control plane version: v1.34.2
	I1208 04:39:33.745511  156821 api_server.go:131] duration metric: took 7.635579ms to wait for apiserver health ...
	I1208 04:39:33.745522  156821 system_pods.go:43] waiting for kube-system pods to appear ...
	I1208 04:39:33.748721  156821 system_pods.go:59] 7 kube-system pods found
	I1208 04:39:33.748752  156821 system_pods.go:61] "coredns-66bc5c9577-6nhcj" [eec02b86-9149-4d99-be2f-6fa44a44f412] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 04:39:33.748766  156821 system_pods.go:61] "etcd-test-preload-308311" [64ea6f5b-b99b-4268-b1ae-38b5ae8af6b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1208 04:39:33.748781  156821 system_pods.go:61] "kube-apiserver-test-preload-308311" [7e514cd7-3155-4008-8f41-a27c426237bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1208 04:39:33.748790  156821 system_pods.go:61] "kube-controller-manager-test-preload-308311" [882f73ee-2e7f-4d01-bb69-e83149494e53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1208 04:39:33.748794  156821 system_pods.go:61] "kube-proxy-24v6n" [c8b3d4ff-ec0a-42da-a3ae-c60be2370f51] Running
	I1208 04:39:33.748801  156821 system_pods.go:61] "kube-scheduler-test-preload-308311" [e303b745-9995-4cdc-8ab3-ffe40376a239] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1208 04:39:33.748804  156821 system_pods.go:61] "storage-provisioner" [e69ed018-d93d-435d-9366-82c6745f7192] Running
	I1208 04:39:33.748814  156821 system_pods.go:74] duration metric: took 3.282559ms to wait for pod list to return data ...
	I1208 04:39:33.748821  156821 default_sa.go:34] waiting for default service account to be created ...
	I1208 04:39:33.751418  156821 default_sa.go:45] found service account: "default"
	I1208 04:39:33.751442  156821 default_sa.go:55] duration metric: took 2.613053ms for default service account to be created ...
	I1208 04:39:33.751453  156821 system_pods.go:116] waiting for k8s-apps to be running ...
	I1208 04:39:33.754569  156821 system_pods.go:86] 7 kube-system pods found
	I1208 04:39:33.754593  156821 system_pods.go:89] "coredns-66bc5c9577-6nhcj" [eec02b86-9149-4d99-be2f-6fa44a44f412] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1208 04:39:33.754602  156821 system_pods.go:89] "etcd-test-preload-308311" [64ea6f5b-b99b-4268-b1ae-38b5ae8af6b4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1208 04:39:33.754613  156821 system_pods.go:89] "kube-apiserver-test-preload-308311" [7e514cd7-3155-4008-8f41-a27c426237bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1208 04:39:33.754626  156821 system_pods.go:89] "kube-controller-manager-test-preload-308311" [882f73ee-2e7f-4d01-bb69-e83149494e53] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1208 04:39:33.754633  156821 system_pods.go:89] "kube-proxy-24v6n" [c8b3d4ff-ec0a-42da-a3ae-c60be2370f51] Running
	I1208 04:39:33.754639  156821 system_pods.go:89] "kube-scheduler-test-preload-308311" [e303b745-9995-4cdc-8ab3-ffe40376a239] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1208 04:39:33.754642  156821 system_pods.go:89] "storage-provisioner" [e69ed018-d93d-435d-9366-82c6745f7192] Running
	I1208 04:39:33.754649  156821 system_pods.go:126] duration metric: took 3.189013ms to wait for k8s-apps to be running ...
	I1208 04:39:33.754655  156821 system_svc.go:44] waiting for kubelet service to be running ....
	I1208 04:39:33.754701  156821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 04:39:33.770935  156821 system_svc.go:56] duration metric: took 16.27256ms WaitForService to wait for kubelet
	I1208 04:39:33.770954  156821 kubeadm.go:587] duration metric: took 314.990406ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1208 04:39:33.770969  156821 node_conditions.go:102] verifying NodePressure condition ...
	I1208 04:39:33.773366  156821 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1208 04:39:33.773383  156821 node_conditions.go:123] node cpu capacity is 2
	I1208 04:39:33.773394  156821 node_conditions.go:105] duration metric: took 2.420663ms to run NodePressure ...
	I1208 04:39:33.773404  156821 start.go:242] waiting for startup goroutines ...
	I1208 04:39:33.869153  156821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1208 04:39:33.871119  156821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1208 04:39:34.467960  156821 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1208 04:39:34.469026  156821 addons.go:530] duration metric: took 1.012977833s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1208 04:39:34.469065  156821 start.go:247] waiting for cluster config update ...
	I1208 04:39:34.469077  156821 start.go:256] writing updated cluster config ...
	I1208 04:39:34.469340  156821 ssh_runner.go:195] Run: rm -f paused
	I1208 04:39:34.474591  156821 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 04:39:34.475081  156821 kapi.go:59] client config for test-preload-308311: &rest.Config{Host:"https://192.168.39.42:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21409-125868/.minikube/profiles/test-preload-308311/client.crt", KeyFile:"/home/jenkins/minikube-integration/21409-125868/.minikube/profiles/test-preload-308311/client.key", CAFile:"/home/jenkins/minikube-integration/21409-125868/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28156e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1208 04:39:34.477716  156821 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6nhcj" in "kube-system" namespace to be "Ready" or be gone ...
	W1208 04:39:36.484025  156821 pod_ready.go:104] pod "coredns-66bc5c9577-6nhcj" is not "Ready", error: <nil>
	W1208 04:39:38.983187  156821 pod_ready.go:104] pod "coredns-66bc5c9577-6nhcj" is not "Ready", error: <nil>
	W1208 04:39:40.984072  156821 pod_ready.go:104] pod "coredns-66bc5c9577-6nhcj" is not "Ready", error: <nil>
	W1208 04:39:42.985251  156821 pod_ready.go:104] pod "coredns-66bc5c9577-6nhcj" is not "Ready", error: <nil>
	W1208 04:39:45.483776  156821 pod_ready.go:104] pod "coredns-66bc5c9577-6nhcj" is not "Ready", error: <nil>
	I1208 04:39:45.983329  156821 pod_ready.go:94] pod "coredns-66bc5c9577-6nhcj" is "Ready"
	I1208 04:39:45.983359  156821 pod_ready.go:86] duration metric: took 11.505615213s for pod "coredns-66bc5c9577-6nhcj" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 04:39:45.985915  156821 pod_ready.go:83] waiting for pod "etcd-test-preload-308311" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 04:39:45.990490  156821 pod_ready.go:94] pod "etcd-test-preload-308311" is "Ready"
	I1208 04:39:45.990519  156821 pod_ready.go:86] duration metric: took 4.583142ms for pod "etcd-test-preload-308311" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 04:39:45.992300  156821 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-308311" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 04:39:45.997273  156821 pod_ready.go:94] pod "kube-apiserver-test-preload-308311" is "Ready"
	I1208 04:39:45.997297  156821 pod_ready.go:86] duration metric: took 4.97576ms for pod "kube-apiserver-test-preload-308311" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 04:39:45.999053  156821 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-308311" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 04:39:46.182326  156821 pod_ready.go:94] pod "kube-controller-manager-test-preload-308311" is "Ready"
	I1208 04:39:46.182371  156821 pod_ready.go:86] duration metric: took 183.286182ms for pod "kube-controller-manager-test-preload-308311" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 04:39:46.381533  156821 pod_ready.go:83] waiting for pod "kube-proxy-24v6n" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 04:39:46.782066  156821 pod_ready.go:94] pod "kube-proxy-24v6n" is "Ready"
	I1208 04:39:46.782098  156821 pod_ready.go:86] duration metric: took 400.535112ms for pod "kube-proxy-24v6n" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 04:39:46.982749  156821 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-308311" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 04:39:47.382907  156821 pod_ready.go:94] pod "kube-scheduler-test-preload-308311" is "Ready"
	I1208 04:39:47.382932  156821 pod_ready.go:86] duration metric: took 400.156759ms for pod "kube-scheduler-test-preload-308311" in "kube-system" namespace to be "Ready" or be gone ...
	I1208 04:39:47.382943  156821 pod_ready.go:40] duration metric: took 12.908322757s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1208 04:39:47.426476  156821 start.go:625] kubectl: 1.34.2, cluster: 1.34.2 (minor skew: 0)
	I1208 04:39:47.428132  156821 out.go:179] * Done! kubectl is now configured to use "test-preload-308311" cluster and "default" namespace by default
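
For reference, the pod_ready.go waits above poll each kube-system pod until its Ready condition reports true, or until the pod disappears. Below is a minimal client-go sketch of that polling pattern (not minikube's actual implementation; the kubeconfig path, namespace, and pod name are illustrative, taken from the log above):

// poll_ready.go: a minimal sketch of the "wait for pod Ready" pattern seen in
// the pod_ready.go log lines above. Names and paths are illustrative only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); minikube writes its own path.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms, give up after 4 minutes (the same budget the log shows).
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-6nhcj", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet"
			}
			// A pod is Ready when its PodReady condition is True.
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("ready:", err == nil)
}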
	
	
	==> CRI-O <==
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.165403294Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765168788165382710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c639edfe-921c-4116-94b8-3c1f3c9b67ec name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.166103943Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=75276eb7-3596-48c4-8741-1ee5fda91a59 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.166165918Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=75276eb7-3596-48c4-8741-1ee5fda91a59 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.166331328Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:317144ccbdd718145542c206b94938a8387f4415b4a2ff11f8ea77776bc16bae,PodSandboxId:868d24f33f92079319462075a318f78750b9e1a42b591ab6cedfe9fc04e37e9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765168776026930781,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6nhcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eec02b86-9149-4d99-be2f-6fa44a44f412,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eca1dfec6790c68e21570cd1666823d259fcc01a609009f1d1a0c7aaf1933f3,PodSandboxId:a1ca6a5f2f0ff8065661539408b60ae1cd216f9edbd96463e04b33314acc5602,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765168772449166048,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-24v6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8b3d4ff-ec0a-42da-a3ae-c60be2370f51,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce3bf71509f98241e58351e3b5f9472e2f3a4ab69f5eb9cef2c30ac5e4bd8f6f,PodSandboxId:e738210070cfaa631774ce07b74b423a8bf78bcbcc1be967dd368d87962717b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765168772494704193,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e69ed018-d93d-435d-9366-82c6745f7192,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7f7a9c4c09dea88ba00c40be7d01a68746e09e36e4ec0757f1489eb01e94ba,PodSandboxId:f38d29226e963a70b69b880b98ec93ee963f12d4ec993719e3ea8b0ec757dd81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765168768853349435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-308311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4642d6a8ca284eca4192524de3f7a362,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed45dcac9dbf8ad0e2f8607456a886c33f8d849aa013778bcaf75d85a2f99406,PodSandboxId:c01a736bbaf727b78f4121a09b2a19fab817f13cfe645a7b4c484e98552fb3bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,Crea
tedAt:1765168768831427008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-308311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c505b375ae877bb67d4344548052307a,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d232b03e5012b3e91f525b59cc45c377563e61592f8f38e4447a0a8b21d77e02,PodSandboxId:0e83a5f0c4ac251a9d3b82f5d41cadb6ac7b1e0269f5af6aa248979408e1d050,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765168768804153896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-308311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd7831eca28816963255f2a8300fa08b,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6356cd819fe8a88b800f31bc7206eb220f8d934db032df385defb37595089275,PodSandboxId:206cd6585e4a3f702b2955ceb048ed35f0716b93683268a5e2cdac6f14ad53ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8ba
cf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765168768789788755,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-308311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7bd1df4b46c643299188f5b24757fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=75276eb7-3596-48c4-8741-1ee5fda91a59 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.198135648Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=af490e8d-c331-4172-b02d-8db5a728dbed name=/runtime.v1.RuntimeService/Version
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.198216322Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=af490e8d-c331-4172-b02d-8db5a728dbed name=/runtime.v1.RuntimeService/Version
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.199231312Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=587e6819-8902-411f-bfbd-471458cdb757 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.199659509Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765168788199581229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=587e6819-8902-411f-bfbd-471458cdb757 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.200376152Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26e1a22f-7f97-43f3-9ea5-508f3b827226 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.200454623Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26e1a22f-7f97-43f3-9ea5-508f3b827226 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.200644817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:317144ccbdd718145542c206b94938a8387f4415b4a2ff11f8ea77776bc16bae,PodSandboxId:868d24f33f92079319462075a318f78750b9e1a42b591ab6cedfe9fc04e37e9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765168776026930781,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6nhcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eec02b86-9149-4d99-be2f-6fa44a44f412,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eca1dfec6790c68e21570cd1666823d259fcc01a609009f1d1a0c7aaf1933f3,PodSandboxId:a1ca6a5f2f0ff8065661539408b60ae1cd216f9edbd96463e04b33314acc5602,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765168772449166048,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-24v6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8b3d4ff-ec0a-42da-a3ae-c60be2370f51,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce3bf71509f98241e58351e3b5f9472e2f3a4ab69f5eb9cef2c30ac5e4bd8f6f,PodSandboxId:e738210070cfaa631774ce07b74b423a8bf78bcbcc1be967dd368d87962717b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765168772494704193,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e69ed018-d93d-435d-9366-82c6745f7192,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7f7a9c4c09dea88ba00c40be7d01a68746e09e36e4ec0757f1489eb01e94ba,PodSandboxId:f38d29226e963a70b69b880b98ec93ee963f12d4ec993719e3ea8b0ec757dd81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765168768853349435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-308311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4642d6a8ca284eca4192524de3f7a362,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed45dcac9dbf8ad0e2f8607456a886c33f8d849aa013778bcaf75d85a2f99406,PodSandboxId:c01a736bbaf727b78f4121a09b2a19fab817f13cfe645a7b4c484e98552fb3bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,Crea
tedAt:1765168768831427008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-308311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c505b375ae877bb67d4344548052307a,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d232b03e5012b3e91f525b59cc45c377563e61592f8f38e4447a0a8b21d77e02,PodSandboxId:0e83a5f0c4ac251a9d3b82f5d41cadb6ac7b1e0269f5af6aa248979408e1d050,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765168768804153896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-308311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd7831eca28816963255f2a8300fa08b,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6356cd819fe8a88b800f31bc7206eb220f8d934db032df385defb37595089275,PodSandboxId:206cd6585e4a3f702b2955ceb048ed35f0716b93683268a5e2cdac6f14ad53ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8ba
cf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765168768789788755,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-308311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7bd1df4b46c643299188f5b24757fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26e1a22f-7f97-43f3-9ea5-508f3b827226 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.231129663Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e01b9eeb-3d2c-4e2d-9299-18dba106b7e1 name=/runtime.v1.RuntimeService/Version
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.231193806Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e01b9eeb-3d2c-4e2d-9299-18dba106b7e1 name=/runtime.v1.RuntimeService/Version
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.232465862Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5375ada-3bba-44d7-9d18-0178d3281141 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.233321575Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765168788233298778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5375ada-3bba-44d7-9d18-0178d3281141 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.234325069Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b1ca1a9-9ad2-4248-8253-5ada5c68e254 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.234431182Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b1ca1a9-9ad2-4248-8253-5ada5c68e254 name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.234747411Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:317144ccbdd718145542c206b94938a8387f4415b4a2ff11f8ea77776bc16bae,PodSandboxId:868d24f33f92079319462075a318f78750b9e1a42b591ab6cedfe9fc04e37e9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765168776026930781,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6nhcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eec02b86-9149-4d99-be2f-6fa44a44f412,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eca1dfec6790c68e21570cd1666823d259fcc01a609009f1d1a0c7aaf1933f3,PodSandboxId:a1ca6a5f2f0ff8065661539408b60ae1cd216f9edbd96463e04b33314acc5602,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765168772449166048,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-24v6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8b3d4ff-ec0a-42da-a3ae-c60be2370f51,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce3bf71509f98241e58351e3b5f9472e2f3a4ab69f5eb9cef2c30ac5e4bd8f6f,PodSandboxId:e738210070cfaa631774ce07b74b423a8bf78bcbcc1be967dd368d87962717b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765168772494704193,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e69ed018-d93d-435d-9366-82c6745f7192,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7f7a9c4c09dea88ba00c40be7d01a68746e09e36e4ec0757f1489eb01e94ba,PodSandboxId:f38d29226e963a70b69b880b98ec93ee963f12d4ec993719e3ea8b0ec757dd81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765168768853349435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-308311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4642d6a8ca284eca4192524de3f7a362,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed45dcac9dbf8ad0e2f8607456a886c33f8d849aa013778bcaf75d85a2f99406,PodSandboxId:c01a736bbaf727b78f4121a09b2a19fab817f13cfe645a7b4c484e98552fb3bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,Crea
tedAt:1765168768831427008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-308311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c505b375ae877bb67d4344548052307a,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d232b03e5012b3e91f525b59cc45c377563e61592f8f38e4447a0a8b21d77e02,PodSandboxId:0e83a5f0c4ac251a9d3b82f5d41cadb6ac7b1e0269f5af6aa248979408e1d050,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765168768804153896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-308311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd7831eca28816963255f2a8300fa08b,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6356cd819fe8a88b800f31bc7206eb220f8d934db032df385defb37595089275,PodSandboxId:206cd6585e4a3f702b2955ceb048ed35f0716b93683268a5e2cdac6f14ad53ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8ba
cf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765168768789788755,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-308311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7bd1df4b46c643299188f5b24757fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b1ca1a9-9ad2-4248-8253-5ada5c68e254 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.260551091Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f06b100-84f4-455e-ad3c-75140afcf277 name=/runtime.v1.RuntimeService/Version
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.260657376Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f06b100-84f4-455e-ad3c-75140afcf277 name=/runtime.v1.RuntimeService/Version
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.262229519Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2e254090-1a47-4571-8e92-5869863110ea name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.262681839Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765168788262561106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e254090-1a47-4571-8e92-5869863110ea name=/runtime.v1.ImageService/ImageFsInfo
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.263412279Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92f444e5-bf1c-4de8-b818-423e2465f89f name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.263462458Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92f444e5-bf1c-4de8-b818-423e2465f89f name=/runtime.v1.RuntimeService/ListContainers
	Dec 08 04:39:48 test-preload-308311 crio[832]: time="2025-12-08 04:39:48.263650132Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:317144ccbdd718145542c206b94938a8387f4415b4a2ff11f8ea77776bc16bae,PodSandboxId:868d24f33f92079319462075a318f78750b9e1a42b591ab6cedfe9fc04e37e9e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765168776026930781,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6nhcj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eec02b86-9149-4d99-be2f-6fa44a44f412,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3eca1dfec6790c68e21570cd1666823d259fcc01a609009f1d1a0c7aaf1933f3,PodSandboxId:a1ca6a5f2f0ff8065661539408b60ae1cd216f9edbd96463e04b33314acc5602,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765168772449166048,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-24v6n,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8b3d4ff-ec0a-42da-a3ae-c60be2370f51,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce3bf71509f98241e58351e3b5f9472e2f3a4ab69f5eb9cef2c30ac5e4bd8f6f,PodSandboxId:e738210070cfaa631774ce07b74b423a8bf78bcbcc1be967dd368d87962717b7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765168772494704193,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e69ed018-d93d-435d-9366-82c6745f7192,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef7f7a9c4c09dea88ba00c40be7d01a68746e09e36e4ec0757f1489eb01e94ba,PodSandboxId:f38d29226e963a70b69b880b98ec93ee963f12d4ec993719e3ea8b0ec757dd81,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765168768853349435,Labels:map[string]string{io.kubernetes.container.name: etcd,io.k
ubernetes.pod.name: etcd-test-preload-308311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4642d6a8ca284eca4192524de3f7a362,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed45dcac9dbf8ad0e2f8607456a886c33f8d849aa013778bcaf75d85a2f99406,PodSandboxId:c01a736bbaf727b78f4121a09b2a19fab817f13cfe645a7b4c484e98552fb3bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,Crea
tedAt:1765168768831427008,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-308311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c505b375ae877bb67d4344548052307a,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d232b03e5012b3e91f525b59cc45c377563e61592f8f38e4447a0a8b21d77e02,PodSandboxId:0e83a5f0c4ac251a9d3b82f5d41cadb6ac7b1e0269f5af6aa248979408e1d050,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHan
dler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765168768804153896,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-308311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd7831eca28816963255f2a8300fa08b,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6356cd819fe8a88b800f31bc7206eb220f8d934db032df385defb37595089275,PodSandboxId:206cd6585e4a3f702b2955ceb048ed35f0716b93683268a5e2cdac6f14ad53ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8ba
cf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765168768789788755,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-308311,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7bd1df4b46c643299188f5b24757fd7,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92f444e5-bf1c-4de8-b818-423e2465f89f name=/runtime.v1.RuntimeServic
e/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	317144ccbdd71       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   1                   868d24f33f920       coredns-66bc5c9577-6nhcj                      kube-system
	ce3bf71509f98       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       1                   e738210070cfa       storage-provisioner                           kube-system
	3eca1dfec6790       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   15 seconds ago      Running             kube-proxy                1                   a1ca6a5f2f0ff       kube-proxy-24v6n                              kube-system
	ef7f7a9c4c09d       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   19 seconds ago      Running             etcd                      1                   f38d29226e963       etcd-test-preload-308311                      kube-system
	ed45dcac9dbf8       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   19 seconds ago      Running             kube-apiserver            1                   c01a736bbaf72       kube-apiserver-test-preload-308311            kube-system
	d232b03e5012b       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   19 seconds ago      Running             kube-scheduler            1                   0e83a5f0c4ac2       kube-scheduler-test-preload-308311            kube-system
	6356cd819fe8a       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   19 seconds ago      Running             kube-controller-manager   1                   206cd6585e4a3       kube-controller-manager-test-preload-308311   kube-system
	
	
	==> coredns [317144ccbdd718145542c206b94938a8387f4415b4a2ff11f8ea77776bc16bae] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:32772 - 45962 "HINFO IN 3248501550155063216.4182229309244545490. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027488021s
	
	
	==> describe nodes <==
	Name:               test-preload-308311
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-308311
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=730a0938e5fe3e95dced085e5e597b4345feecad
	                    minikube.k8s.io/name=test-preload-308311
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_08T04_38_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 08 Dec 2025 04:38:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-308311
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 08 Dec 2025 04:39:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 08 Dec 2025 04:39:33 +0000   Mon, 08 Dec 2025 04:37:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 08 Dec 2025 04:39:33 +0000   Mon, 08 Dec 2025 04:37:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 08 Dec 2025 04:39:33 +0000   Mon, 08 Dec 2025 04:37:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 08 Dec 2025 04:39:33 +0000   Mon, 08 Dec 2025 04:39:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.42
	  Hostname:    test-preload-308311
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 d5d0c01c4dee420f94db317e89f92d90
	  System UUID:                d5d0c01c-4dee-420f-94db-317e89f92d90
	  Boot ID:                    7793e41b-0164-43b7-ab0a-f5048def79da
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-6nhcj                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     100s
	  kube-system                 etcd-test-preload-308311                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         105s
	  kube-system                 kube-apiserver-test-preload-308311             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-test-preload-308311    200m (10%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-proxy-24v6n                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-scheduler-test-preload-308311             100m (5%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 98s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Normal   NodeHasSufficientMemory  105s               kubelet          Node test-preload-308311 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  105s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    105s               kubelet          Node test-preload-308311 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     105s               kubelet          Node test-preload-308311 status is now: NodeHasSufficientPID
	  Normal   Starting                 105s               kubelet          Starting kubelet.
	  Normal   NodeReady                104s               kubelet          Node test-preload-308311 status is now: NodeReady
	  Normal   RegisteredNode           101s               node-controller  Node test-preload-308311 event: Registered Node test-preload-308311 in Controller
	  Normal   Starting                 22s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-308311 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-308311 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-308311 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 17s                kubelet          Node test-preload-308311 has been rebooted, boot id: 7793e41b-0164-43b7-ab0a-f5048def79da
	  Normal   RegisteredNode           14s                node-controller  Node test-preload-308311 event: Registered Node test-preload-308311 in Controller
	
	
	==> dmesg <==
	[Dec 8 04:39] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001488] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.000733] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.993121] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.118590] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.095118] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.456551] kauditd_printk_skb: 168 callbacks suppressed
	[  +9.735265] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [ef7f7a9c4c09dea88ba00c40be7d01a68746e09e36e4ec0757f1489eb01e94ba] <==
	{"level":"warn","ts":"2025-12-08T04:39:30.510389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.525819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.534709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.543558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.559316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.569642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.581772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.589661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.601932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.611292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.619003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.626559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.633382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.641477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.650246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.659020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.666072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.673552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.680795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.692032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39294","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:39294: read: connection reset by peer"}
	{"level":"warn","ts":"2025-12-08T04:39:30.702962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.707993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.716708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.736452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-08T04:39:30.778570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39386","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 04:39:48 up 0 min,  0 users,  load average: 0.71, 0.19, 0.06
	Linux test-preload-308311 6.6.95 #1 SMP PREEMPT_DYNAMIC Mon Dec  8 03:04:10 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [ed45dcac9dbf8ad0e2f8607456a886c33f8d849aa013778bcaf75d85a2f99406] <==
	I1208 04:39:31.396498       1 aggregator.go:171] initial CRD sync complete...
	I1208 04:39:31.396534       1 autoregister_controller.go:144] Starting autoregister controller
	I1208 04:39:31.396540       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1208 04:39:31.396545       1 cache.go:39] Caches are synced for autoregister controller
	I1208 04:39:31.400156       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1208 04:39:31.400359       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1208 04:39:31.400436       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1208 04:39:31.411577       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1208 04:39:31.411663       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1208 04:39:31.414844       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1208 04:39:31.427021       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1208 04:39:31.427259       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1208 04:39:31.427845       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1208 04:39:31.452210       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1208 04:39:31.452263       1 policy_source.go:240] refreshing policies
	I1208 04:39:31.452704       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1208 04:39:32.058327       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1208 04:39:32.297374       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1208 04:39:33.230398       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1208 04:39:33.267781       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1208 04:39:33.295630       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1208 04:39:33.303577       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1208 04:39:34.901933       1 controller.go:667] quota admission added evaluator for: endpoints
	I1208 04:39:35.051036       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1208 04:39:35.101245       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [6356cd819fe8a88b800f31bc7206eb220f8d934db032df385defb37595089275] <==
	I1208 04:39:34.746185       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1208 04:39:34.746232       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1208 04:39:34.746658       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1208 04:39:34.746675       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1208 04:39:34.746686       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1208 04:39:34.746704       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1208 04:39:34.746774       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1208 04:39:34.748025       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1208 04:39:34.748463       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1208 04:39:34.749111       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1208 04:39:34.751112       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1208 04:39:34.751484       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1208 04:39:34.751542       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1208 04:39:34.751891       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1208 04:39:34.762303       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1208 04:39:34.762519       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1208 04:39:34.762880       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1208 04:39:34.762861       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1208 04:39:34.772958       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1208 04:39:34.775377       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1208 04:39:34.782644       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1208 04:39:34.787940       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1208 04:39:34.788013       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1208 04:39:34.788058       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-308311"
	I1208 04:39:34.788102       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [3eca1dfec6790c68e21570cd1666823d259fcc01a609009f1d1a0c7aaf1933f3] <==
	I1208 04:39:32.739472       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1208 04:39:32.840112       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1208 04:39:32.840167       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.42"]
	E1208 04:39:32.840233       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1208 04:39:32.874049       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1208 04:39:32.874173       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1208 04:39:32.874212       1 server_linux.go:132] "Using iptables Proxier"
	I1208 04:39:32.882732       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1208 04:39:32.883033       1 server.go:527] "Version info" version="v1.34.2"
	I1208 04:39:32.883060       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 04:39:32.887322       1 config.go:200] "Starting service config controller"
	I1208 04:39:32.887353       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1208 04:39:32.887371       1 config.go:106] "Starting endpoint slice config controller"
	I1208 04:39:32.887375       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1208 04:39:32.887399       1 config.go:403] "Starting serviceCIDR config controller"
	I1208 04:39:32.887403       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1208 04:39:32.887506       1 config.go:309] "Starting node config controller"
	I1208 04:39:32.887515       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1208 04:39:32.987450       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1208 04:39:32.987476       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1208 04:39:32.987482       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1208 04:39:32.987545       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [d232b03e5012b3e91f525b59cc45c377563e61592f8f38e4447a0a8b21d77e02] <==
	I1208 04:39:30.065215       1 serving.go:386] Generated self-signed cert in-memory
	W1208 04:39:31.351643       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1208 04:39:31.352689       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1208 04:39:31.352777       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1208 04:39:31.352800       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1208 04:39:31.428637       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1208 04:39:31.429292       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1208 04:39:31.434962       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1208 04:39:31.435668       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 04:39:31.436233       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1208 04:39:31.435680       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1208 04:39:31.537542       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 08 04:39:31 test-preload-308311 kubelet[1177]: E1208 04:39:31.589830    1177 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-test-preload-308311\" already exists" pod="kube-system/etcd-test-preload-308311"
	Dec 08 04:39:31 test-preload-308311 kubelet[1177]: I1208 04:39:31.939157    1177 apiserver.go:52] "Watching apiserver"
	Dec 08 04:39:31 test-preload-308311 kubelet[1177]: E1208 04:39:31.943947    1177 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-6nhcj" podUID="eec02b86-9149-4d99-be2f-6fa44a44f412"
	Dec 08 04:39:31 test-preload-308311 kubelet[1177]: I1208 04:39:31.976910    1177 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 08 04:39:32 test-preload-308311 kubelet[1177]: I1208 04:39:32.054547    1177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8b3d4ff-ec0a-42da-a3ae-c60be2370f51-xtables-lock\") pod \"kube-proxy-24v6n\" (UID: \"c8b3d4ff-ec0a-42da-a3ae-c60be2370f51\") " pod="kube-system/kube-proxy-24v6n"
	Dec 08 04:39:32 test-preload-308311 kubelet[1177]: I1208 04:39:32.054618    1177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8b3d4ff-ec0a-42da-a3ae-c60be2370f51-lib-modules\") pod \"kube-proxy-24v6n\" (UID: \"c8b3d4ff-ec0a-42da-a3ae-c60be2370f51\") " pod="kube-system/kube-proxy-24v6n"
	Dec 08 04:39:32 test-preload-308311 kubelet[1177]: I1208 04:39:32.055855    1177 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e69ed018-d93d-435d-9366-82c6745f7192-tmp\") pod \"storage-provisioner\" (UID: \"e69ed018-d93d-435d-9366-82c6745f7192\") " pod="kube-system/storage-provisioner"
	Dec 08 04:39:32 test-preload-308311 kubelet[1177]: E1208 04:39:32.056453    1177 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 08 04:39:32 test-preload-308311 kubelet[1177]: E1208 04:39:32.056513    1177 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eec02b86-9149-4d99-be2f-6fa44a44f412-config-volume podName:eec02b86-9149-4d99-be2f-6fa44a44f412 nodeName:}" failed. No retries permitted until 2025-12-08 04:39:32.556495783 +0000 UTC m=+5.704825904 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/eec02b86-9149-4d99-be2f-6fa44a44f412-config-volume") pod "coredns-66bc5c9577-6nhcj" (UID: "eec02b86-9149-4d99-be2f-6fa44a44f412") : object "kube-system"/"coredns" not registered
	Dec 08 04:39:32 test-preload-308311 kubelet[1177]: I1208 04:39:32.081757    1177 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-308311"
	Dec 08 04:39:32 test-preload-308311 kubelet[1177]: I1208 04:39:32.082147    1177 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-308311"
	Dec 08 04:39:32 test-preload-308311 kubelet[1177]: I1208 04:39:32.082430    1177 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-308311"
	Dec 08 04:39:32 test-preload-308311 kubelet[1177]: E1208 04:39:32.098118    1177 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-308311\" already exists" pod="kube-system/kube-apiserver-test-preload-308311"
	Dec 08 04:39:32 test-preload-308311 kubelet[1177]: E1208 04:39:32.098137    1177 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-308311\" already exists" pod="kube-system/kube-scheduler-test-preload-308311"
	Dec 08 04:39:32 test-preload-308311 kubelet[1177]: E1208 04:39:32.098751    1177 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-test-preload-308311\" already exists" pod="kube-system/etcd-test-preload-308311"
	Dec 08 04:39:32 test-preload-308311 kubelet[1177]: E1208 04:39:32.559788    1177 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 08 04:39:32 test-preload-308311 kubelet[1177]: E1208 04:39:32.560127    1177 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eec02b86-9149-4d99-be2f-6fa44a44f412-config-volume podName:eec02b86-9149-4d99-be2f-6fa44a44f412 nodeName:}" failed. No retries permitted until 2025-12-08 04:39:33.559841701 +0000 UTC m=+6.708171821 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/eec02b86-9149-4d99-be2f-6fa44a44f412-config-volume") pod "coredns-66bc5c9577-6nhcj" (UID: "eec02b86-9149-4d99-be2f-6fa44a44f412") : object "kube-system"/"coredns" not registered
	Dec 08 04:39:33 test-preload-308311 kubelet[1177]: I1208 04:39:33.448128    1177 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 08 04:39:33 test-preload-308311 kubelet[1177]: E1208 04:39:33.567157    1177 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 08 04:39:33 test-preload-308311 kubelet[1177]: E1208 04:39:33.567258    1177 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eec02b86-9149-4d99-be2f-6fa44a44f412-config-volume podName:eec02b86-9149-4d99-be2f-6fa44a44f412 nodeName:}" failed. No retries permitted until 2025-12-08 04:39:35.567245295 +0000 UTC m=+8.715575417 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/eec02b86-9149-4d99-be2f-6fa44a44f412-config-volume") pod "coredns-66bc5c9577-6nhcj" (UID: "eec02b86-9149-4d99-be2f-6fa44a44f412") : object "kube-system"/"coredns" not registered
	Dec 08 04:39:37 test-preload-308311 kubelet[1177]: E1208 04:39:37.037371    1177 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765168777037081864 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 08 04:39:37 test-preload-308311 kubelet[1177]: E1208 04:39:37.037406    1177 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765168777037081864 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 08 04:39:45 test-preload-308311 kubelet[1177]: I1208 04:39:45.594734    1177 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 08 04:39:47 test-preload-308311 kubelet[1177]: E1208 04:39:47.040344    1177 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765168787038888186 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 08 04:39:47 test-preload-308311 kubelet[1177]: E1208 04:39:47.040394    1177 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765168787038888186 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	
	
	==> storage-provisioner [ce3bf71509f98241e58351e3b5f9472e2f3a4ab69f5eb9cef2c30ac5e4bd8f6f] <==
	I1208 04:39:32.634295       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-308311 -n test-preload-308311
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-308311 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-308311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-308311
--- FAIL: TestPreload (150.68s)

Test pass (382/437)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 27.99
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.17
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.2/json-events 11.49
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.08
18 TestDownloadOnly/v1.34.2/DeleteAll 0.16
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.16
21 TestDownloadOnly/v1.35.0-beta.0/json-events 12.34
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.17
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.7
31 TestOffline 78.57
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 129.34
40 TestAddons/serial/GCPAuth/Namespaces 0.14
41 TestAddons/serial/GCPAuth/FakeCredentials 10.54
44 TestAddons/parallel/Registry 19.37
45 TestAddons/parallel/RegistryCreds 0.73
47 TestAddons/parallel/InspektorGadget 10.93
48 TestAddons/parallel/MetricsServer 5.98
50 TestAddons/parallel/CSI 57.02
51 TestAddons/parallel/Headlamp 22.87
52 TestAddons/parallel/CloudSpanner 5.57
53 TestAddons/parallel/LocalPath 58.88
54 TestAddons/parallel/NvidiaDevicePlugin 7
55 TestAddons/parallel/Yakd 11.79
57 TestAddons/StoppedEnableDisable 71.84
58 TestCertOptions 76.84
59 TestCertExpiration 290.56
61 TestForceSystemdFlag 54.46
62 TestForceSystemdEnv 67.66
67 TestErrorSpam/setup 35.44
68 TestErrorSpam/start 0.32
69 TestErrorSpam/status 0.64
70 TestErrorSpam/pause 1.46
71 TestErrorSpam/unpause 1.58
72 TestErrorSpam/stop 5.22
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 50.1
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 35.4
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.12
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.1
84 TestFunctional/serial/CacheCmd/cache/add_local 2.22
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.41
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 38.55
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.21
95 TestFunctional/serial/LogsFileCmd 1.2
96 TestFunctional/serial/InvalidService 4.07
98 TestFunctional/parallel/ConfigCmd 0.44
99 TestFunctional/parallel/DashboardCmd 43.83
100 TestFunctional/parallel/DryRun 0.23
101 TestFunctional/parallel/InternationalLanguage 0.12
102 TestFunctional/parallel/StatusCmd 0.89
106 TestFunctional/parallel/ServiceCmdConnect 9.52
107 TestFunctional/parallel/AddonsCmd 0.16
108 TestFunctional/parallel/PersistentVolumeClaim 49.84
110 TestFunctional/parallel/SSHCmd 0.37
111 TestFunctional/parallel/CpCmd 1.19
112 TestFunctional/parallel/MySQL 28.64
113 TestFunctional/parallel/FileSync 0.19
114 TestFunctional/parallel/CertSync 0.99
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.34
122 TestFunctional/parallel/License 0.44
123 TestFunctional/parallel/ServiceCmd/DeployApp 9.2
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
134 TestFunctional/parallel/ProfileCmd/profile_list 0.31
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
136 TestFunctional/parallel/Version/short 0.08
137 TestFunctional/parallel/Version/components 0.58
138 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
139 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
140 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
141 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
142 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
143 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
144 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
145 TestFunctional/parallel/ImageCommands/ImageBuild 4.43
146 TestFunctional/parallel/ImageCommands/Setup 1.95
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.18
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.82
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.83
150 TestFunctional/parallel/ServiceCmd/List 0.24
151 TestFunctional/parallel/ServiceCmd/JSONOutput 0.22
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.27
153 TestFunctional/parallel/ServiceCmd/Format 0.26
154 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.66
155 TestFunctional/parallel/ServiceCmd/URL 0.38
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
157 TestFunctional/parallel/MountCmd/any-port 25.3
158 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.64
159 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
160 TestFunctional/parallel/MountCmd/specific-port 1.31
161 TestFunctional/parallel/MountCmd/VerifyCleanup 1.17
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 74.09
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 29.09
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.04
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.1
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.03
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 2.22
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.07
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.07
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.18
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.51
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.13
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.13
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.12
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 36.4
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.07
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.2
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.21
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.04
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.47
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 13.71
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.21
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.12
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.65
199 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 10.48
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.2
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.36
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.19
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 30.38
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.24
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.22
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.06
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.34
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.41
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.07
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.08
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.07
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 10.19
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.4
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.32
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.3
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 9.05
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.38
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.22
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.23
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.25
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.22
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 4.62
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.91
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.1
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.85
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.72
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.6
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.44
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.66
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.54
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.3
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.73
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.27
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.38
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.39
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.37
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.19
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 193.07
262 TestMultiControlPlane/serial/DeployApp 7.56
263 TestMultiControlPlane/serial/PingHostFromPods 1.3
264 TestMultiControlPlane/serial/AddWorkerNode 41.56
265 TestMultiControlPlane/serial/NodeLabels 0.08
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.65
267 TestMultiControlPlane/serial/CopyFile 10.57
268 TestMultiControlPlane/serial/StopSecondaryNode 86.82
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.49
270 TestMultiControlPlane/serial/RestartSecondaryNode 34.39
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.66
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 375.71
273 TestMultiControlPlane/serial/DeleteSecondaryNode 17.78
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.5
275 TestMultiControlPlane/serial/StopCluster 258.72
276 TestMultiControlPlane/serial/RestartCluster 104.6
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.52
278 TestMultiControlPlane/serial/AddSecondaryNode 72.96
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.66
284 TestJSONOutput/start/Command 72.54
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.69
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.61
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 6.83
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.23
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 74.22
316 TestMountStart/serial/StartWithMountFirst 19.66
317 TestMountStart/serial/VerifyMountFirst 0.32
318 TestMountStart/serial/StartWithMountSecond 20.18
319 TestMountStart/serial/VerifyMountSecond 0.31
320 TestMountStart/serial/DeleteFirst 0.66
321 TestMountStart/serial/VerifyMountPostDelete 0.31
322 TestMountStart/serial/Stop 1.21
323 TestMountStart/serial/RestartStopped 18.33
324 TestMountStart/serial/VerifyMountPostStop 0.31
327 TestMultiNode/serial/FreshStart2Nodes 94.61
328 TestMultiNode/serial/DeployApp2Nodes 5.83
329 TestMultiNode/serial/PingHostFrom2Pods 0.84
330 TestMultiNode/serial/AddNode 42.4
331 TestMultiNode/serial/MultiNodeLabels 0.06
332 TestMultiNode/serial/ProfileList 0.45
333 TestMultiNode/serial/CopyFile 6.03
334 TestMultiNode/serial/StopNode 2.18
335 TestMultiNode/serial/StartAfterStop 37.66
336 TestMultiNode/serial/RestartKeepsNodes 326.19
337 TestMultiNode/serial/DeleteNode 2.49
338 TestMultiNode/serial/StopMultiNode 167.46
339 TestMultiNode/serial/RestartMultiNode 113.3
340 TestMultiNode/serial/ValidateNameConflict 38.27
347 TestScheduledStopUnix 107.6
351 TestRunningBinaryUpgrade 394.36
353 TestKubernetesUpgrade 171.59
356 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
357 TestNoKubernetes/serial/StartWithK8s 76.85
358 TestNoKubernetes/serial/StartWithStopK8s 29.7
366 TestNetworkPlugins/group/false 5.2
370 TestISOImage/Setup 29.42
371 TestNoKubernetes/serial/Start 37.99
373 TestISOImage/Binaries/crictl 0.19
374 TestISOImage/Binaries/curl 0.2
375 TestISOImage/Binaries/docker 0.19
376 TestISOImage/Binaries/git 0.19
377 TestISOImage/Binaries/iptables 0.18
378 TestISOImage/Binaries/podman 0.2
379 TestISOImage/Binaries/rsync 0.18
380 TestISOImage/Binaries/socat 0.19
381 TestISOImage/Binaries/wget 0.21
382 TestISOImage/Binaries/VBoxControl 0.2
383 TestISOImage/Binaries/VBoxService 0.19
384 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
385 TestNoKubernetes/serial/VerifyK8sNotRunning 0.17
386 TestNoKubernetes/serial/ProfileList 25.05
387 TestNoKubernetes/serial/Stop 1.33
388 TestNoKubernetes/serial/StartNoArgs 18.32
389 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.18
390 TestStoppedBinaryUpgrade/Setup 3.76
391 TestStoppedBinaryUpgrade/Upgrade 98.51
400 TestPause/serial/Start 86.58
401 TestStoppedBinaryUpgrade/MinikubeLogs 1.22
402 TestNetworkPlugins/group/auto/Start 57.3
403 TestPause/serial/SecondStartNoReconfiguration 39.68
404 TestNetworkPlugins/group/auto/KubeletFlags 0.2
405 TestNetworkPlugins/group/auto/NetCatPod 11.27
406 TestPause/serial/Pause 0.72
407 TestPause/serial/VerifyStatus 0.23
408 TestNetworkPlugins/group/kindnet/Start 56.44
409 TestPause/serial/Unpause 0.69
410 TestPause/serial/PauseAgain 0.75
411 TestPause/serial/DeletePaused 0.93
412 TestPause/serial/VerifyDeletedResources 0.55
413 TestNetworkPlugins/group/calico/Start 101.4
414 TestNetworkPlugins/group/auto/DNS 0.17
415 TestNetworkPlugins/group/auto/Localhost 0.12
416 TestNetworkPlugins/group/auto/HairPin 0.13
417 TestNetworkPlugins/group/custom-flannel/Start 89.14
418 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
419 TestNetworkPlugins/group/kindnet/KubeletFlags 0.48
420 TestNetworkPlugins/group/kindnet/NetCatPod 12.41
421 TestNetworkPlugins/group/kindnet/DNS 0.19
422 TestNetworkPlugins/group/kindnet/Localhost 0.15
423 TestNetworkPlugins/group/kindnet/HairPin 0.15
424 TestNetworkPlugins/group/enable-default-cni/Start 89.89
425 TestNetworkPlugins/group/flannel/Start 85.61
426 TestNetworkPlugins/group/calico/ControllerPod 6.01
427 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
428 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.29
429 TestNetworkPlugins/group/calico/KubeletFlags 0.22
430 TestNetworkPlugins/group/calico/NetCatPod 13.32
431 TestNetworkPlugins/group/custom-flannel/DNS 0.16
432 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
433 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
434 TestNetworkPlugins/group/calico/DNS 0.16
435 TestNetworkPlugins/group/calico/Localhost 0.14
436 TestNetworkPlugins/group/calico/HairPin 0.13
437 TestNetworkPlugins/group/bridge/Start 86.66
439 TestStartStop/group/old-k8s-version/serial/FirstStart 74.87
440 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.19
441 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.26
442 TestNetworkPlugins/group/flannel/ControllerPod 6.01
443 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
444 TestNetworkPlugins/group/flannel/NetCatPod 11.26
445 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
446 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
447 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
448 TestNetworkPlugins/group/flannel/DNS 0.17
449 TestNetworkPlugins/group/flannel/Localhost 0.16
450 TestNetworkPlugins/group/flannel/HairPin 0.22
452 TestStartStop/group/no-preload/serial/FirstStart 95
454 TestStartStop/group/embed-certs/serial/FirstStart 101.71
455 TestStartStop/group/old-k8s-version/serial/DeployApp 11.37
456 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
457 TestNetworkPlugins/group/bridge/NetCatPod 10.26
458 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.37
459 TestStartStop/group/old-k8s-version/serial/Stop 70.12
460 TestNetworkPlugins/group/bridge/DNS 0.16
461 TestNetworkPlugins/group/bridge/Localhost 0.13
462 TestNetworkPlugins/group/bridge/HairPin 0.14
464 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.11
465 TestStartStop/group/no-preload/serial/DeployApp 12.32
466 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
467 TestStartStop/group/old-k8s-version/serial/SecondStart 42.07
468 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
469 TestStartStop/group/no-preload/serial/Stop 76.97
470 TestStartStop/group/embed-certs/serial/DeployApp 12.3
471 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.09
472 TestStartStop/group/embed-certs/serial/Stop 86.81
473 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.28
474 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 12.01
475 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.97
476 TestStartStop/group/default-k8s-diff-port/serial/Stop 75.59
477 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
478 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
479 TestStartStop/group/old-k8s-version/serial/Pause 2.46
481 TestStartStop/group/newest-cni/serial/FirstStart 40.37
482 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
483 TestStartStop/group/no-preload/serial/SecondStart 51.65
484 TestStartStop/group/newest-cni/serial/DeployApp 0
485 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.98
486 TestStartStop/group/newest-cni/serial/Stop 7.24
487 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
488 TestStartStop/group/embed-certs/serial/SecondStart 44.92
489 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
490 TestStartStop/group/newest-cni/serial/SecondStart 85.79
491 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
492 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 66.03
493 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7.01
494 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
495 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
496 TestStartStop/group/no-preload/serial/Pause 3.58
498 TestISOImage/PersistentMounts//data 0.2
499 TestISOImage/PersistentMounts//var/lib/docker 0.17
500 TestISOImage/PersistentMounts//var/lib/cni 0.2
501 TestISOImage/PersistentMounts//var/lib/kubelet 0.19
502 TestISOImage/PersistentMounts//var/lib/minikube 0.18
503 TestISOImage/PersistentMounts//var/lib/toolbox 0.17
504 TestISOImage/PersistentMounts//var/lib/boot2docker 0.17
505 TestISOImage/VersionJSON 0.2
506 TestISOImage/eBPFSupport 0.19
507 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.01
508 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.21
509 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
510 TestStartStop/group/embed-certs/serial/Pause 2.65
511 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.01
512 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
513 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
514 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.2
515 TestStartStop/group/newest-cni/serial/Pause 2.26
516 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
517 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.19
518 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.36

TestDownloadOnly/v1.28.0/json-events (27.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-442566 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-442566 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (27.987282484s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (27.99s)
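
The json-events variants drive "minikube start -o=json", which reports progress as one JSON object per line on stdout. Below is a minimal Go sketch of a consumer for that stream; it assumes nothing about the event schema beyond a "type" field and a free-form "data" payload, since the full schema is not shown in this report.

// events.go: read minikube's line-delimited JSON event stream from stdin
// and print one summary line per event. The "type"/"data" field names are
// an assumption based on the stream being line-delimited JSON objects.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string                 `json:"type"`
	Data map[string]interface{} `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // individual events can be long lines
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON noise interleaved in the stream
		}
		fmt.Printf("%s: %v\n", ev.Type, ev.Data["message"])
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
}

Piped as "out/minikube-linux-amd64 start -o=json ... | go run events.go", this prints one line per event instead of the raw stream.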

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1208 03:39:25.725849  129900 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1208 03:39:25.725998  129900 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21409-125868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
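
preload-exists only checks that the tarball from the previous step landed in the local cache. A sketch of that check, with the filename layout copied verbatim from the paths logged above (treat the "v18" preload schema version as an input of this particular minikube build, not a constant; the helper name is ours):

// preload_check.go: rebuild the cache path seen in this report and stat it.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath mirrors the layout in the log:
// $MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-<version>-cri-o-overlay-amd64.tar.lz4
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	home := os.Getenv("MINIKUBE_HOME") // e.g. .../21409-125868/.minikube in this run
	p := preloadPath(home, "v1.28.0")
	if _, err := os.Stat(p); err != nil {
		fmt.Println("no local preload:", err)
		return
	}
	fmt.Println("found local preload:", p)
}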

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-442566
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-442566: exit status 85 (83.672916ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-442566 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-442566 │ jenkins │ v1.37.0 │ 08 Dec 25 03:38 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 03:38:57
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 03:38:57.795802  129913 out.go:360] Setting OutFile to fd 1 ...
	I1208 03:38:57.796071  129913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 03:38:57.796079  129913 out.go:374] Setting ErrFile to fd 2...
	I1208 03:38:57.796083  129913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 03:38:57.796318  129913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
	W1208 03:38:57.796433  129913 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21409-125868/.minikube/config/config.json: open /home/jenkins/minikube-integration/21409-125868/.minikube/config/config.json: no such file or directory
	I1208 03:38:57.796970  129913 out.go:368] Setting JSON to true
	I1208 03:38:57.798540  129913 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1282,"bootTime":1765163856,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 03:38:57.798606  129913 start.go:143] virtualization: kvm guest
	I1208 03:38:57.802416  129913 out.go:99] [download-only-442566] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1208 03:38:57.802605  129913 notify.go:221] Checking for updates...
	W1208 03:38:57.802663  129913 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21409-125868/.minikube/cache/preloaded-tarball: no such file or directory
	I1208 03:38:57.803778  129913 out.go:171] MINIKUBE_LOCATION=21409
	I1208 03:38:57.804973  129913 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 03:38:57.806269  129913 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig
	I1208 03:38:57.807383  129913 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube
	I1208 03:38:57.808555  129913 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1208 03:38:57.810656  129913 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1208 03:38:57.811002  129913 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 03:38:58.296870  129913 out.go:99] Using the kvm2 driver based on user configuration
	I1208 03:38:58.296924  129913 start.go:309] selected driver: kvm2
	I1208 03:38:58.296938  129913 start.go:927] validating driver "kvm2" against <nil>
	I1208 03:38:58.297325  129913 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 03:38:58.297912  129913 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1208 03:38:58.298115  129913 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1208 03:38:58.298151  129913 cni.go:84] Creating CNI manager for ""
	I1208 03:38:58.298214  129913 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1208 03:38:58.298228  129913 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1208 03:38:58.298296  129913 start.go:353] cluster config:
	{Name:download-only-442566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-442566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 03:38:58.298534  129913 iso.go:125] acquiring lock: {Name:mkd550ce23b107beb8be7edee8182e09aac2818e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 03:38:58.299993  129913 out.go:99] Downloading VM boot image ...
	I1208 03:38:58.300048  129913 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21409-125868/.minikube/cache/iso/amd64/minikube-v1.37.0-1765151505-21409-amd64.iso
	I1208 03:39:10.983925  129913 out.go:99] Starting "download-only-442566" primary control-plane node in "download-only-442566" cluster
	I1208 03:39:10.983962  129913 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1208 03:39:11.089067  129913 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1208 03:39:11.089098  129913 cache.go:65] Caching tarball of preloaded images
	I1208 03:39:11.089994  129913 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1208 03:39:11.091499  129913 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1208 03:39:11.091519  129913 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1208 03:39:11.197713  129913 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1208 03:39:11.197836  129913 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21409-125868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-442566 host does not exist
	  To start a cluster, run: "minikube start -p download-only-442566"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
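
LogsDuration passes even though the command itself fails: exit status 85 is the expected outcome of "minikube logs" against a profile that was only ever downloaded, never started. A sketch of asserting on one specific exit code in Go, the shape of check the harness is making here:

// exitcode.go: run a command and accept exactly exit status 85.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-442566")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("unexpected success; wanted exit status 85")
	case errors.As(err, &ee) && ee.ExitCode() == 85:
		fmt.Printf("got expected exit status 85 (%d bytes of output)\n", len(out))
	default:
		fmt.Println("unexpected failure:", err)
	}
}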

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-442566
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (11.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-038326 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-038326 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.492691125s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (11.49s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1208 03:39:37.636148  129900 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1208 03:39:37.636189  129900 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21409-125868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-038326
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-038326: exit status 85 (75.456784ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-442566 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-442566 │ jenkins │ v1.37.0 │ 08 Dec 25 03:38 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 08 Dec 25 03:39 UTC │ 08 Dec 25 03:39 UTC │
	│ delete  │ -p download-only-442566                                                                                                                                                 │ download-only-442566 │ jenkins │ v1.37.0 │ 08 Dec 25 03:39 UTC │ 08 Dec 25 03:39 UTC │
	│ start   │ -o=json --download-only -p download-only-038326 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-038326 │ jenkins │ v1.37.0 │ 08 Dec 25 03:39 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 03:39:26
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 03:39:26.201844  130179 out.go:360] Setting OutFile to fd 1 ...
	I1208 03:39:26.201988  130179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 03:39:26.201998  130179 out.go:374] Setting ErrFile to fd 2...
	I1208 03:39:26.202003  130179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 03:39:26.202271  130179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
	I1208 03:39:26.202817  130179 out.go:368] Setting JSON to true
	I1208 03:39:26.203774  130179 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1310,"bootTime":1765163856,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 03:39:26.203859  130179 start.go:143] virtualization: kvm guest
	I1208 03:39:26.205638  130179 out.go:99] [download-only-038326] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1208 03:39:26.205866  130179 notify.go:221] Checking for updates...
	I1208 03:39:26.206960  130179 out.go:171] MINIKUBE_LOCATION=21409
	I1208 03:39:26.208213  130179 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 03:39:26.209480  130179 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig
	I1208 03:39:26.210669  130179 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube
	I1208 03:39:26.211754  130179 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1208 03:39:26.213759  130179 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1208 03:39:26.214109  130179 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 03:39:26.250560  130179 out.go:99] Using the kvm2 driver based on user configuration
	I1208 03:39:26.250608  130179 start.go:309] selected driver: kvm2
	I1208 03:39:26.250618  130179 start.go:927] validating driver "kvm2" against <nil>
	I1208 03:39:26.250970  130179 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 03:39:26.251562  130179 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1208 03:39:26.251699  130179 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1208 03:39:26.251729  130179 cni.go:84] Creating CNI manager for ""
	I1208 03:39:26.251776  130179 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1208 03:39:26.251785  130179 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1208 03:39:26.251829  130179 start.go:353] cluster config:
	{Name:download-only-038326 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-038326 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 03:39:26.251980  130179 iso.go:125] acquiring lock: {Name:mkd550ce23b107beb8be7edee8182e09aac2818e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 03:39:26.253331  130179 out.go:99] Starting "download-only-038326" primary control-plane node in "download-only-038326" cluster
	I1208 03:39:26.253365  130179 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 03:39:26.383080  130179 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1208 03:39:26.383140  130179 cache.go:65] Caching tarball of preloaded images
	I1208 03:39:26.383392  130179 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1208 03:39:26.385264  130179 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1208 03:39:26.385285  130179 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1208 03:39:26.497277  130179 preload.go:295] Got checksum from GCS API "40ac2ac600e3e4b9dc7a3f8c6cb2ed91"
	I1208 03:39:26.497326  130179 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:40ac2ac600e3e4b9dc7a3f8c6cb2ed91 -> /home/jenkins/minikube-integration/21409-125868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-038326 host does not exist
	  To start a cluster, run: "minikube start -p download-only-038326"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-038326
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (12.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-232951 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-232951 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.343637529s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (12.34s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1208 03:39:50.375340  129900 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1208 03:39:50.375385  129900 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21409-125868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-232951
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-232951: exit status 85 (79.742581ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-442566 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-442566 │ jenkins │ v1.37.0 │ 08 Dec 25 03:38 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 08 Dec 25 03:39 UTC │ 08 Dec 25 03:39 UTC │
	│ delete  │ -p download-only-442566                                                                                                                                                        │ download-only-442566 │ jenkins │ v1.37.0 │ 08 Dec 25 03:39 UTC │ 08 Dec 25 03:39 UTC │
	│ start   │ -o=json --download-only -p download-only-038326 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-038326 │ jenkins │ v1.37.0 │ 08 Dec 25 03:39 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 08 Dec 25 03:39 UTC │ 08 Dec 25 03:39 UTC │
	│ delete  │ -p download-only-038326                                                                                                                                                        │ download-only-038326 │ jenkins │ v1.37.0 │ 08 Dec 25 03:39 UTC │ 08 Dec 25 03:39 UTC │
	│ start   │ -o=json --download-only -p download-only-232951 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-232951 │ jenkins │ v1.37.0 │ 08 Dec 25 03:39 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/08 03:39:38
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1208 03:39:38.086293  130391 out.go:360] Setting OutFile to fd 1 ...
	I1208 03:39:38.086416  130391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 03:39:38.086422  130391 out.go:374] Setting ErrFile to fd 2...
	I1208 03:39:38.086428  130391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 03:39:38.086656  130391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
	I1208 03:39:38.087196  130391 out.go:368] Setting JSON to true
	I1208 03:39:38.088118  130391 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1322,"bootTime":1765163856,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 03:39:38.088179  130391 start.go:143] virtualization: kvm guest
	I1208 03:39:38.090118  130391 out.go:99] [download-only-232951] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1208 03:39:38.090841  130391 notify.go:221] Checking for updates...
	I1208 03:39:38.092310  130391 out.go:171] MINIKUBE_LOCATION=21409
	I1208 03:39:38.093540  130391 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 03:39:38.094662  130391 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig
	I1208 03:39:38.095747  130391 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube
	I1208 03:39:38.096798  130391 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1208 03:39:38.098739  130391 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1208 03:39:38.098989  130391 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 03:39:38.129601  130391 out.go:99] Using the kvm2 driver based on user configuration
	I1208 03:39:38.129633  130391 start.go:309] selected driver: kvm2
	I1208 03:39:38.129640  130391 start.go:927] validating driver "kvm2" against <nil>
	I1208 03:39:38.129995  130391 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1208 03:39:38.130535  130391 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1208 03:39:38.130732  130391 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1208 03:39:38.130768  130391 cni.go:84] Creating CNI manager for ""
	I1208 03:39:38.130823  130391 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1208 03:39:38.130835  130391 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1208 03:39:38.130911  130391 start.go:353] cluster config:
	{Name:download-only-232951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-232951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 03:39:38.131020  130391 iso.go:125] acquiring lock: {Name:mkd550ce23b107beb8be7edee8182e09aac2818e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1208 03:39:38.132314  130391 out.go:99] Starting "download-only-232951" primary control-plane node in "download-only-232951" cluster
	I1208 03:39:38.132338  130391 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 03:39:38.234681  130391 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1208 03:39:38.234721  130391 cache.go:65] Caching tarball of preloaded images
	I1208 03:39:38.234956  130391 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1208 03:39:38.236573  130391 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1208 03:39:38.236600  130391 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1208 03:39:38.344048  130391 preload.go:295] Got checksum from GCS API "b4861df7675d96066744278d08e2cd35"
	I1208 03:39:38.344102  130391 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:b4861df7675d96066744278d08e2cd35 -> /home/jenkins/minikube-integration/21409-125868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-232951 host does not exist
	  To start a cluster, run: "minikube start -p download-only-232951"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-232951
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.7s)

                                                
                                                
=== RUN   TestBinaryMirror
I1208 03:39:51.234214  129900 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-485333 --alsologtostderr --binary-mirror http://127.0.0.1:38203 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-485333" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-485333
--- PASS: TestBinaryMirror (0.70s)
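
The interesting line here is the checksum-gated fetch: kubectl is pulled from dl.k8s.io with the expected digest read from the sibling .sha256 file. A sketch of that sidecar-checksum pattern in Go, using the URLs printed above (the tolerant sidecar parsing is an assumption, since such files come both bare and in "digest  filename" form):

// sha256check.go: download a file plus its ".sha256" sidecar and compare digests.
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl"
	body, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sidecar, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sidecar))[0] // bare digest or "digest  filename"
	got := fmt.Sprintf("%x", sha256.Sum256(body))
	if got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	fmt.Println("kubectl checksum OK:", got)
}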

                                                
                                    
TestOffline (78.57s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-129526 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-129526 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m16.99123785s)
helpers_test.go:175: Cleaning up "offline-crio-129526" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-129526
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-129526: (1.574977815s)
--- PASS: TestOffline (78.57s)
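
The "(dbg) Done: ...: (1m16.99123785s)" lines throughout this report append the wall-clock time of each command. A sketch of producing the same measurement around an arbitrary child process (the output format merely imitates the report's):

// timed_run.go: run a command, stream its output, and report elapsed time.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	args := os.Args[1:]
	if len(args) == 0 {
		fmt.Fprintln(os.Stderr, "usage: timed_run <command> [args...]")
		os.Exit(2)
	}
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	start := time.Now()
	err := cmd.Run()
	fmt.Printf("(dbg) Done: %v: (%s)\n", args, time.Since(start))
	if err != nil {
		os.Exit(1)
	}
}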

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-301052
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-301052: exit status 85 (74.451977ms)

                                                
                                                
-- stdout --
	* Profile "addons-301052" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-301052"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-301052
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-301052: exit status 85 (69.559679ms)

                                                
                                                
-- stdout --
	* Profile "addons-301052" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-301052"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (129.34s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-301052 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-301052 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m9.34141952s)
--- PASS: TestAddons/Setup (129.34s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-301052 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-301052 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)
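
The check here is that the gcp-auth addon copies its secret into namespaces created after the addon was enabled. A sketch of the same two kubectl calls from Go, with the context and names taken from the log (the kubectl helper is ours, and error handling is minimal):

// secret_propagation.go: create a namespace, then confirm gcp-auth's secret
// shows up inside it.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) error {
	full := append([]string{"--context", "addons-301052"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	if err := kubectl("create", "ns", "new-namespace"); err != nil {
		panic(err)
	}
	if err := kubectl("get", "secret", "gcp-auth", "-n", "new-namespace"); err != nil {
		panic(err)
	}
	fmt.Println("gcp-auth secret present in new-namespace")
}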

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.54s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-301052 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-301052 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [57b67a40-1452-43b4-aa1c-f17676388dbf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [57b67a40-1452-43b4-aa1c-f17676388dbf] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004574327s
addons_test.go:694: (dbg) Run:  kubectl --context addons-301052 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-301052 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-301052 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.54s)

                                                
                                    
TestAddons/parallel/Registry (19.37s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.175201ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-x2jlk" [2677ee6f-628d-45bb-8063-6506be08f92d] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004125074s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-hvm6b" [f7e79ab0-0bb7-45ac-8393-2235e332d198] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004234372s
addons_test.go:392: (dbg) Run:  kubectl --context addons-301052 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-301052 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-301052 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.5465696s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-301052 ip
2025/12/08 03:42:39 [DEBUG] GET http://192.168.39.103:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301052 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.37s)
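
The "GET http://192.168.39.103:5000" line is the final reachability probe against the registry, addressed by the minikube IP the test fetched just before it. A sketch of that probe:

// registry_probe.go: a plain GET against the registry endpoint from the log.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.39.103:5000")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}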

                                                
                                    
TestAddons/parallel/RegistryCreds (0.73s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 8.89241ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-301052
addons_test.go:332: (dbg) Run:  kubectl --context addons-301052 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301052 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.73s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.93s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-cd4jw" [17ace6c2-32cc-404a-9f14-50c843916fa7] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007665597s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301052 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-301052 addons disable inspektor-gadget --alsologtostderr -v=1: (5.921755243s)
--- PASS: TestAddons/parallel/InspektorGadget (10.93s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.98s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 13.460744ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-wrtw9" [4ad74736-ec20-4605-87d5-2e4b3f380f89] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00380387s
addons_test.go:463: (dbg) Run:  kubectl --context addons-301052 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301052 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.98s)

                                                
                                    
TestAddons/parallel/CSI (57.02s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1208 03:42:33.687523  129900 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1208 03:42:33.694150  129900 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1208 03:42:33.694177  129900 kapi.go:107] duration metric: took 6.671399ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.682322ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-301052 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-301052 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [4ae53fde-9b16-4dba-b92f-6d8f85f0d588] Pending
helpers_test.go:352: "task-pv-pod" [4ae53fde-9b16-4dba-b92f-6d8f85f0d588] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [4ae53fde-9b16-4dba-b92f-6d8f85f0d588] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.003828027s
addons_test.go:572: (dbg) Run:  kubectl --context addons-301052 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-301052 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-301052 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-301052 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-301052 delete pod task-pv-pod: (1.499287791s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-301052 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-301052 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-301052 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [cecc62b4-2e82-440a-86bb-032dffed1826] Pending
helpers_test.go:352: "task-pv-pod-restore" [cecc62b4-2e82-440a-86bb-032dffed1826] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [cecc62b4-2e82-440a-86bb-032dffed1826] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004292349s
addons_test.go:614: (dbg) Run:  kubectl --context addons-301052 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-301052 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-301052 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301052 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301052 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-301052 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.728616888s)
--- PASS: TestAddons/parallel/CSI (57.02s)
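Note: the PVC/snapshot/restore manifests under testdata/csi-hostpath-driver/ are not inlined in this log. A minimal sketch of the same round trip, assuming the addon's default storage class (csi-hostpath-sc) and volume snapshot class (csi-hostpath-snapclass); those names are not confirmed by this log:

    # 1. Claim a volume from the CSI hostpath driver.
    kubectl --context addons-301052 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc
    spec:
      accessModes: ["ReadWriteOnce"]
      resources: { requests: { storage: 1Gi } }
      storageClassName: csi-hostpath-sc   # assumed class name
    EOF
    # 2. Snapshot it.
    kubectl --context addons-301052 apply -f - <<'EOF'
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-demo
    spec:
      volumeSnapshotClassName: csi-hostpath-snapclass   # assumed class name
      source: { persistentVolumeClaimName: hpvc }
    EOF
    # 3. Restore into a fresh claim via dataSource.
    kubectl --context addons-301052 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc-restore
    spec:
      accessModes: ["ReadWriteOnce"]
      resources: { requests: { storage: 1Gi } }
      storageClassName: csi-hostpath-sc
      dataSource:
        name: new-snapshot-demo
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
    EOF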

TestAddons/parallel/Headlamp (22.87s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-301052 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-301052 --alsologtostderr -v=1: (1.093164826s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-zc8mt" [df08861d-f6d2-4202-be43-eeace0e86b05] Pending
helpers_test.go:352: "headlamp-dfcdc64b-zc8mt" [df08861d-f6d2-4202-be43-eeace0e86b05] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-zc8mt" [df08861d-f6d2-4202-be43-eeace0e86b05] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.004667878s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301052 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-301052 addons disable headlamp --alsologtostderr -v=1: (5.774667083s)
--- PASS: TestAddons/parallel/Headlamp (22.87s)

TestAddons/parallel/CloudSpanner (5.57s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-55phg" [766fe113-fd9b-4b77-9f82-7aa4d67d98e5] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003882237s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301052 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.57s)

TestAddons/parallel/LocalPath (58.88s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-301052 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-301052 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-301052 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [b689466a-64fd-40d2-ba43-2c697f9c565f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [b689466a-64fd-40d2-ba43-2c697f9c565f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [b689466a-64fd-40d2-ba43-2c697f9c565f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.00670596s
addons_test.go:967: (dbg) Run:  kubectl --context addons-301052 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-301052 ssh "cat /opt/local-path-provisioner/pvc-7dfb495a-6399-4db8-a94c-9302cbd53b7e_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-301052 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-301052 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301052 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-301052 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.903410436s)
--- PASS: TestAddons/parallel/LocalPath (58.88s)
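Note: the run of Pending polls on test-pvc is expected with the rancher local-path provisioner, whose storage class typically uses volumeBindingMode: WaitForFirstConsumer, so the claim binds only once the consuming pod is scheduled. A sketch of the same claim by hand (the local-path class name is an assumption, not shown in this log):

    kubectl --context addons-301052 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      resources: { requests: { storage: 64Mi } }
      storageClassName: local-path   # assumed class name
    EOF
    # Data lands under /opt/local-path-provisioner/<pv>_<namespace>_<claim>/ on the node,
    # which is the path the test cats above.
    minikube -p addons-301052 ssh -- ls /opt/local-path-provisioner/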

TestAddons/parallel/NvidiaDevicePlugin (7s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-fvs49" [c8a1320a-c4e2-4f2a-be37-56390e503e79] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006756937s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301052 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.00s)

TestAddons/parallel/Yakd (11.79s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-2jkhs" [0d583c14-b7bb-4e64-947e-411d0ab8345b] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004671762s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301052 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-301052 addons disable yakd --alsologtostderr -v=1: (5.780571147s)
--- PASS: TestAddons/parallel/Yakd (11.79s)

TestAddons/StoppedEnableDisable (71.84s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-301052
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-301052: (1m11.634799674s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-301052
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-301052
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-301052
--- PASS: TestAddons/StoppedEnableDisable (71.84s)

TestCertOptions (76.84s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-238906 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-238906 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m15.568808549s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-238906 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-238906 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-238906 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-238906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-238906
--- PASS: TestCertOptions (76.84s)
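Note: a quick way to eyeball what this test asserts, narrowing the same openssl output to the SAN list and checking the non-default API server port (flag values as in this run):

    # SANs should include 192.168.15.15, localhost and www.google.com.
    minikube -p cert-options-238906 ssh -- \
      "sudo openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'
    # The kubeconfig server URL should use the requested port 8555.
    kubectl --context cert-options-238906 config view --minify | grep server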

TestCertExpiration (290.56s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-266977 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-266977 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (58.673343681s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-266977 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-266977 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (50.96715184s)
helpers_test.go:175: Cleaning up "cert-expiration-266977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-266977
--- PASS: TestCertExpiration (290.56s)
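Note: the second start presumably happens after the 3m certificates have expired, forcing regeneration; 8760h is 365 x 24h, i.e. one year. To inspect the resulting validity window by hand:

    # Print notBefore/notAfter for the regenerated API server certificate.
    minikube -p cert-expiration-266977 ssh -- \
      "sudo openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"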

TestForceSystemdFlag (54.46s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-166300 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1208 04:44:36.252688  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-166300 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (53.281074401s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-166300 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-166300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-166300
--- PASS: TestForceSystemdFlag (54.46s)
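Note: the cat of the CRI-O drop-in above is presumably checking the cgroup manager; in CRI-O's TOML config that setting lives under the [crio.runtime] table. A narrowed version of the same check:

    # With --force-systemd, the drop-in should select the systemd cgroup manager.
    minikube -p force-systemd-flag-166300 ssh -- \
      "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"   # expect: cgroup_manager = "systemd"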

TestForceSystemdEnv (67.66s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-067979 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-067979 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m6.703582681s)
helpers_test.go:175: Cleaning up "force-systemd-env-067979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-067979
--- PASS: TestForceSystemdEnv (67.66s)

TestErrorSpam/setup (35.44s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-645143 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-645143 --driver=kvm2  --container-runtime=crio
E1208 03:47:02.004260  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:47:02.010739  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:47:02.022150  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:47:02.043554  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:47:02.084986  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:47:02.166533  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:47:02.328214  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:47:02.649983  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:47:03.292114  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:47:04.574167  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:47:07.135628  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-645143 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-645143 --driver=kvm2  --container-runtime=crio: (35.437744388s)
--- PASS: TestErrorSpam/setup (35.44s)

TestErrorSpam/start (0.32s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-645143 --log_dir /tmp/nospam-645143 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-645143 --log_dir /tmp/nospam-645143 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-645143 --log_dir /tmp/nospam-645143 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

TestErrorSpam/status (0.64s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-645143 --log_dir /tmp/nospam-645143 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-645143 --log_dir /tmp/nospam-645143 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-645143 --log_dir /tmp/nospam-645143 status
--- PASS: TestErrorSpam/status (0.64s)

TestErrorSpam/pause (1.46s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-645143 --log_dir /tmp/nospam-645143 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-645143 --log_dir /tmp/nospam-645143 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-645143 --log_dir /tmp/nospam-645143 pause
--- PASS: TestErrorSpam/pause (1.46s)

TestErrorSpam/unpause (1.58s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-645143 --log_dir /tmp/nospam-645143 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-645143 --log_dir /tmp/nospam-645143 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-645143 --log_dir /tmp/nospam-645143 unpause
--- PASS: TestErrorSpam/unpause (1.58s)

TestErrorSpam/stop (5.22s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-645143 --log_dir /tmp/nospam-645143 stop
E1208 03:47:12.257764  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-645143 --log_dir /tmp/nospam-645143 stop: (1.989991089s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-645143 --log_dir /tmp/nospam-645143 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-645143 --log_dir /tmp/nospam-645143 stop: (2.038223002s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-645143 --log_dir /tmp/nospam-645143 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-645143 --log_dir /tmp/nospam-645143 stop: (1.191314156s)
--- PASS: TestErrorSpam/stop (5.22s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21409-125868/.minikube/files/etc/test/nested/copy/129900/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (50.1s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-194253 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1208 03:47:22.499525  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:47:42.981138  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-194253 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (50.100128545s)
--- PASS: TestFunctional/serial/StartWithProxy (50.10s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (35.4s)
=== RUN   TestFunctional/serial/SoftStart
I1208 03:48:07.972489  129900 config.go:182] Loaded profile config "functional-194253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-194253 --alsologtostderr -v=8
E1208 03:48:23.943109  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-194253 --alsologtostderr -v=8: (35.403050027s)
functional_test.go:678: soft start took 35.403797352s for "functional-194253" cluster.
I1208 03:48:43.375948  129900 config.go:182] Loaded profile config "functional-194253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (35.40s)

TestFunctional/serial/KubeContext (0.05s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.12s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-194253 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.1s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-194253 cache add registry.k8s.io/pause:3.1: (1.012855177s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-194253 cache add registry.k8s.io/pause:3.3: (1.021619052s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-194253 cache add registry.k8s.io/pause:latest: (1.060972464s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.10s)

TestFunctional/serial/CacheCmd/cache/add_local (2.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-194253 /tmp/TestFunctionalserialCacheCmdcacheadd_local2183622811/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 cache add minikube-local-cache-test:functional-194253
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-194253 cache add minikube-local-cache-test:functional-194253: (1.885162487s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 cache delete minikube-local-cache-test:functional-194253
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-194253
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.22s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.41s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-194253 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (182.17964ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.41s)
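Note: the sequence above, written out as plain commands; cache reload pushes every image in minikube's local cache back into the node after it was deleted in-cluster:

    minikube -p functional-194253 cache add registry.k8s.io/pause:latest
    minikube -p functional-194253 ssh -- sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-194253 ssh -- sudo crictl inspecti registry.k8s.io/pause:latest  # exit 1: image gone
    minikube -p functional-194253 cache reload
    minikube -p functional-194253 ssh -- sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again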

TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 kubectl -- --context functional-194253 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-194253 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (38.55s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-194253 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-194253 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.549392116s)
functional_test.go:776: restart took 38.549530686s for "functional-194253" cluster.
I1208 03:49:29.508873  129900 config.go:182] Loaded profile config "functional-194253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (38.55s)
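Note: --extra-config=apiserver.<flag>=<value> is passed through to the kube-apiserver command line. One hedged way to confirm the plugin list took effect (kubeadm labels the static pod with component=kube-apiserver):

    kubectl --context functional-194253 -n kube-system get pods -l component=kube-apiserver -o yaml \
      | grep enable-admission-plugins   # expect ...=NamespaceAutoProvision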

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-194253 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.21s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-194253 logs: (1.2066269s)
--- PASS: TestFunctional/serial/LogsCmd (1.21s)

TestFunctional/serial/LogsFileCmd (1.2s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 logs --file /tmp/TestFunctionalserialLogsFileCmd1106723280/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-194253 logs --file /tmp/TestFunctionalserialLogsFileCmd1106723280/001/logs.txt: (1.200581859s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.20s)

TestFunctional/serial/InvalidService (4.07s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-194253 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-194253
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-194253: exit status 115 (242.990269ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.115:31348 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-194253 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.07s)
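Note: the SVC_UNREACHABLE exit (status 115) fires because the service exists but has no running pods behind it. testdata/invalidsvc.yaml is not inlined here; a minimal sketch of a service of that shape (selector and ports are assumptions):

    kubectl --context functional-194253 apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: invalid-svc
    spec:
      type: NodePort
      selector: { app: no-such-pod }   # matches no pods, so the service has no endpoints
      ports:
      - port: 80
        targetPort: 80
    EOF
    minikube -p functional-194253 service invalid-svc   # exits 115: no running pod found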

TestFunctional/parallel/ConfigCmd (0.44s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-194253 config get cpus: exit status 14 (81.050082ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-194253 config get cpus: exit status 14 (62.891486ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

TestFunctional/parallel/DashboardCmd (43.83s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-194253 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-194253 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 136086: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (43.83s)

TestFunctional/parallel/DryRun (0.23s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-194253 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-194253 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (113.241772ms)

-- stdout --
	* [functional-194253] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1208 03:49:37.305044  135350 out.go:360] Setting OutFile to fd 1 ...
	I1208 03:49:37.305334  135350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 03:49:37.305345  135350 out.go:374] Setting ErrFile to fd 2...
	I1208 03:49:37.305349  135350 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 03:49:37.305559  135350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
	I1208 03:49:37.305993  135350 out.go:368] Setting JSON to false
	I1208 03:49:37.306968  135350 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1921,"bootTime":1765163856,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 03:49:37.307022  135350 start.go:143] virtualization: kvm guest
	I1208 03:49:37.308968  135350 out.go:179] * [functional-194253] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1208 03:49:37.310133  135350 out.go:179]   - MINIKUBE_LOCATION=21409
	I1208 03:49:37.310155  135350 notify.go:221] Checking for updates...
	I1208 03:49:37.312378  135350 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 03:49:37.313524  135350 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig
	I1208 03:49:37.314716  135350 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube
	I1208 03:49:37.315858  135350 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 03:49:37.316837  135350 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 03:49:37.318319  135350 config.go:182] Loaded profile config "functional-194253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 03:49:37.318805  135350 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 03:49:37.349306  135350 out.go:179] * Using the kvm2 driver based on existing profile
	I1208 03:49:37.350388  135350 start.go:309] selected driver: kvm2
	I1208 03:49:37.350400  135350 start.go:927] validating driver "kvm2" against &{Name:functional-194253 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-194253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 03:49:37.350496  135350 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 03:49:37.352469  135350 out.go:203] 
	W1208 03:49:37.353657  135350 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1208 03:49:37.354749  135350 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-194253 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-194253 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-194253 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (117.375351ms)

-- stdout --
	* [functional-194253] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1208 03:49:37.534546  135382 out.go:360] Setting OutFile to fd 1 ...
	I1208 03:49:37.534817  135382 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 03:49:37.534827  135382 out.go:374] Setting ErrFile to fd 2...
	I1208 03:49:37.534832  135382 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 03:49:37.535205  135382 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
	I1208 03:49:37.535695  135382 out.go:368] Setting JSON to false
	I1208 03:49:37.536652  135382 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1921,"bootTime":1765163856,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 03:49:37.536717  135382 start.go:143] virtualization: kvm guest
	I1208 03:49:37.538396  135382 out.go:179] * [functional-194253] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1208 03:49:37.539747  135382 out.go:179]   - MINIKUBE_LOCATION=21409
	I1208 03:49:37.539758  135382 notify.go:221] Checking for updates...
	I1208 03:49:37.542001  135382 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 03:49:37.543230  135382 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig
	I1208 03:49:37.544462  135382 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube
	I1208 03:49:37.545458  135382 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 03:49:37.546424  135382 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 03:49:37.548036  135382 config.go:182] Loaded profile config "functional-194253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 03:49:37.548704  135382 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 03:49:37.580801  135382 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1208 03:49:37.581751  135382 start.go:309] selected driver: kvm2
	I1208 03:49:37.581766  135382 start.go:927] validating driver "kvm2" against &{Name:functional-194253 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-194253 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.115 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 03:49:37.581871  135382 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 03:49:37.583616  135382 out.go:203] 
	W1208 03:49:37.584706  135382 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1208 03:49:37.585711  135382 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
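
(The French output above is expected: this test deliberately runs minikube under a French locale. The error translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is below the usable minimum of 1800 MB".)

A minimal Go sketch of the memory gate the dry run trips over, with illustrative names (minikube's real check is more involved); the 1800 MB floor and exit code 23 are taken from the log above:

package main

import (
	"fmt"
	"os"
)

const minUsableMemoryMB = 1800 // the usable minimum reported in this run

// validateMemory mirrors the shape of the check: too little requested
// memory aborts the start before any VM work happens.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMB is below the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil { // --memory 250MB, as passed above
		fmt.Fprintln(os.Stderr, "X Exiting due to", err)
		os.Exit(23) // the exit status the test asserts
	}
}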

TestFunctional/parallel/StatusCmd (0.89s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.89s)
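
Note that the -f argument above is a Go template rendered over the status struct; "kublet" is simply the label the test's format string uses (the template field is {{.Kubelet}}). A self-contained sketch of that mechanism, with a stand-in Status type rather than minikube's real one:

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for the struct minikube renders; only the fields
// referenced by the test's format string are included.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	format := "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	_ = tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
}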

TestFunctional/parallel/ServiceCmdConnect (9.52s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-194253 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-194253 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-2l889" [0c0d419a-7f7e-4695-a6c5-b67e7639b806] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-2l889" [0c0d419a-7f7e-4695-a6c5-b67e7639b806] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.004538595s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.115:30916
functional_test.go:1680: http://192.168.39.115:30916: success! body:
Request served by hello-node-connect-7d85dfc575-2l889

HTTP/1.1 GET /

Host: 192.168.39.115:30916
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.52s)
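
The check above boils down to: expose the deployment on a NodePort, ask minikube for the URL, and GET it until the echo server answers. A rough, self-contained equivalent (URL copied from this run; retry count and delay are arbitrary choices, not the harness's):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.39.115:30916" // from `minikube service hello-node-connect --url`
	for attempt := 1; attempt <= 5; attempt++ {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s: success! body:\n%s\n", url, body)
			return
		}
		time.Sleep(2 * time.Second) // crude fixed backoff between attempts
	}
	fmt.Println("service never became reachable")
}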

TestFunctional/parallel/AddonsCmd (0.16s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (49.84s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [d1ed4c9e-3800-4b6a-a706-0b65d33d7eb6] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004243314s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-194253 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-194253 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-194253 get pvc myclaim -o=json
I1208 03:49:43.220232  129900 retry.go:31] will retry after 2.555221342s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:16c4b389-a926-4ce7-9d8d-51e3a6d09485 ResourceVersion:694 Generation:0 CreationTimestamp:2025-12-08 03:49:43 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001a6ec10 VolumeMode:0xc001a6ec20 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-194253 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-194253 apply -f testdata/storage-provisioner/pod.yaml
E1208 03:49:45.865358  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [cfeae7f2-2af4-498b-b4e6-0787ac6dd40a] Pending
helpers_test.go:352: "sp-pod" [cfeae7f2-2af4-498b-b4e6-0787ac6dd40a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [cfeae7f2-2af4-498b-b4e6-0787ac6dd40a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.007295022s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-194253 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-194253 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-194253 delete -f testdata/storage-provisioner/pod.yaml: (4.302895718s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-194253 apply -f testdata/storage-provisioner/pod.yaml
I1208 03:50:05.655626  129900 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d4768f92-8685-46f5-8858-4f6397c1ce46] Pending
helpers_test.go:352: "sp-pod" [d4768f92-8685-46f5-8858-4f6397c1ce46] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [d4768f92-8685-46f5-8858-4f6397c1ce46] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.00395026s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-194253 exec sp-pod -- ls /tmp/mount
2025/12/08 03:50:32 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (49.84s)
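
The retry at 03:49:43 is the harness waiting for the claim to move from Pending to Bound once storage-provisioner creates the backing hostpath volume. A sketch of that wait with client-go (assumes the default kubeconfig path; error handling trimmed for brevity):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	client, _ := kubernetes.NewForConfig(cfg)
	for {
		pvc, err := client.CoreV1().PersistentVolumeClaims("default").
			Get(context.TODO(), "myclaim", metav1.GetOptions{})
		if err == nil && pvc.Status.Phase == corev1.ClaimBound {
			fmt.Println("pvc bound to", pvc.Spec.VolumeName)
			return
		}
		time.Sleep(2 * time.Second) // the harness uses a randomized backoff instead
	}
}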

TestFunctional/parallel/SSHCmd (0.37s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.37s)

TestFunctional/parallel/CpCmd (1.19s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh -n functional-194253 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 cp functional-194253:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3102288983/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh -n functional-194253 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh -n functional-194253 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.19s)

TestFunctional/parallel/MySQL (28.64s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-194253 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-vttgj" [ed42ec50-f79b-40b1-949e-435be2f008c4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-vttgj" [ed42ec50-f79b-40b1-949e-435be2f008c4] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.006738194s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-194253 exec mysql-5bb876957f-vttgj -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-194253 exec mysql-5bb876957f-vttgj -- mysql -ppassword -e "show databases;": exit status 1 (274.228666ms)
** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1208 03:50:11.211172  129900 retry.go:31] will retry after 1.360623963s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-194253 exec mysql-5bb876957f-vttgj -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-194253 exec mysql-5bb876957f-vttgj -- mysql -ppassword -e "show databases;": exit status 1 (181.119365ms)
** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1208 03:50:12.753847  129900 retry.go:31] will retry after 2.201200223s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-194253 exec mysql-5bb876957f-vttgj -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.64s)
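
Both failures above are normal while mysqld bootstraps: the first attempt hits the server before the root password is applied (ERROR 1045), the second before the socket exists (ERROR 2002), and the harness simply retries with a growing interval until the query succeeds. A sketch of that loop (pod name and password taken from this run; the backoff growth is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	backoff := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		cmd := exec.Command("kubectl", "--context", "functional-194253", "exec",
			"mysql-5bb876957f-vttgj", "--", "mysql", "-ppassword", "-e", "show databases;")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("%s", out)
			return
		}
		time.Sleep(backoff)
		backoff += backoff / 2 // grow ~1.5x per attempt, roughly like the retry.go intervals above
	}
	fmt.Println("mysql never became ready")
}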

TestFunctional/parallel/FileSync (0.19s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/129900/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "sudo cat /etc/test/nested/copy/129900/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

TestFunctional/parallel/CertSync (0.99s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/129900.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "sudo cat /etc/ssl/certs/129900.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/129900.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "sudo cat /usr/share/ca-certificates/129900.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1299002.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "sudo cat /etc/ssl/certs/1299002.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1299002.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "sudo cat /usr/share/ca-certificates/1299002.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.99s)
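
The three paths checked per cert are the direct copy, the ca-certificates share, and an OpenSSL subject-hash name (51391683.0 / 3ec20f2e.0). A sketch of the underlying assumption, namely that every synced location holds identical bytes, checking two of the locations from this run:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	direct, err1 := os.ReadFile("/etc/ssl/certs/129900.pem")
	hashed, err2 := os.ReadFile("/etc/ssl/certs/51391683.0") // OpenSSL <subject-hash>.0 naming
	if err1 != nil || err2 != nil {
		fmt.Println("cert missing:", err1, err2)
		return
	}
	fmt.Println("synced:", bytes.Equal(direct, hashed))
}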

TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-194253 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-194253 ssh "sudo systemctl is-active docker": exit status 1 (169.498547ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-194253 ssh "sudo systemctl is-active containerd": exit status 1 (170.991675ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)
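
The non-zero exits here are the point of the test: `systemctl is-active` prints the unit state and returns exit status 3 for an inactive unit, so with crio as the runtime both docker and containerd should fail this probe. A sketch of reading both the state string and the exit code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeDisabled reports whether a unit is inactive, mirroring the probe above:
// stdout carries the state, and a stopped unit exits with status 3.
func runtimeDisabled(unit string) bool {
	out, err := exec.Command("systemctl", "is-active", unit).Output()
	state := strings.TrimSpace(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Printf("%s: state=%s exit=%d\n", unit, state, exitErr.ExitCode())
		return state == "inactive"
	}
	return false // exit 0 means the unit is active, which would fail the test
}

func main() {
	fmt.Println(runtimeDisabled("docker"), runtimeDisabled("containerd"))
}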

TestFunctional/parallel/License (0.44s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.44s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.2s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-194253 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-194253 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-qcdq5" [632bc562-ed4e-4bda-88bf-fd4c5952da82] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-qcdq5" [632bc562-ed4e-4bda-88bf-fd4c5952da82] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004957325s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.20s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "247.472001ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "61.928148ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "237.786667ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "62.410821ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

TestFunctional/parallel/Version/short (0.08s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.58s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.58s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-194253 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-194253
localhost/kicbase/echo-server:functional-194253
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-194253 image ls --format short --alsologtostderr:
I1208 03:50:15.497630  136468 out.go:360] Setting OutFile to fd 1 ...
I1208 03:50:15.497778  136468 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:50:15.497793  136468 out.go:374] Setting ErrFile to fd 2...
I1208 03:50:15.497800  136468 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:50:15.498189  136468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
I1208 03:50:15.499094  136468 config.go:182] Loaded profile config "functional-194253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 03:50:15.499283  136468 config.go:182] Loaded profile config "functional-194253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 03:50:15.502278  136468 ssh_runner.go:195] Run: systemctl --version
I1208 03:50:15.505605  136468 main.go:143] libmachine: domain functional-194253 has defined MAC address 52:54:00:8f:fe:0a in network mk-functional-194253
I1208 03:50:15.506096  136468 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8f:fe:0a", ip: ""} in network mk-functional-194253: {Iface:virbr1 ExpiryTime:2025-12-08 04:47:32 +0000 UTC Type:0 Mac:52:54:00:8f:fe:0a Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:functional-194253 Clientid:01:52:54:00:8f:fe:0a}
I1208 03:50:15.506123  136468 main.go:143] libmachine: domain functional-194253 has defined IP address 192.168.39.115 and MAC address 52:54:00:8f:fe:0a in network mk-functional-194253
I1208 03:50:15.506291  136468 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/functional-194253/id_rsa Username:docker}
I1208 03:50:15.612837  136468 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
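
Every `image ls` variant in this group (short, table, json, yaml) is rendered from the same source: the stderr trace shows each invocation SSHing into the VM and running `sudo crictl images --output json`. A sketch of consuming that payload; the struct is a partial, assumed shape based on the fields visible in this report, not crictl's full schema:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models just the fields this report surfaces per image.
type crictlImages struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags, img.Size) // e.g. [registry.k8s.io/pause:3.10.1] 742092
	}
}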

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-194253 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-194253  │ 2a4865958a687 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-194253  │ 9056ab77afb8e │ 4.94MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-194253 image ls --format table --alsologtostderr:
I1208 03:50:16.032685  136517 out.go:360] Setting OutFile to fd 1 ...
I1208 03:50:16.032814  136517 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:50:16.032825  136517 out.go:374] Setting ErrFile to fd 2...
I1208 03:50:16.032830  136517 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:50:16.033078  136517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
I1208 03:50:16.033757  136517 config.go:182] Loaded profile config "functional-194253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 03:50:16.033871  136517 config.go:182] Loaded profile config "functional-194253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 03:50:16.036064  136517 ssh_runner.go:195] Run: systemctl --version
I1208 03:50:16.038374  136517 main.go:143] libmachine: domain functional-194253 has defined MAC address 52:54:00:8f:fe:0a in network mk-functional-194253
I1208 03:50:16.038800  136517 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8f:fe:0a", ip: ""} in network mk-functional-194253: {Iface:virbr1 ExpiryTime:2025-12-08 04:47:32 +0000 UTC Type:0 Mac:52:54:00:8f:fe:0a Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:functional-194253 Clientid:01:52:54:00:8f:fe:0a}
I1208 03:50:16.038829  136517 main.go:143] libmachine: domain functional-194253 has defined IP address 192.168.39.115 and MAC address 52:54:00:8f:fe:0a in network mk-functional-194253
I1208 03:50:16.039019  136517 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/functional-194253/id_rsa Username:docker}
I1208 03:50:16.146588  136517 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-194253 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"2a4865958a687c30c2e879bf870a59e4d537b068b0a1b1a35810d30f71098580","repoDigests":["localhost/minikube-local-cache-test@sha256:796b45a805cdb32b4b7455fbc3e0ee088f16f285d93c1098202fc6399b10eca9"],"repoTags":["localhost/minikube-local-cache-test:functional-194253"],"size":"3330"},{"id":"a5f569d49a
979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e0
4303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-194253"],"size":"4944818"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha2
56:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.
k8s.io/pause:3.10.1"],"size":"742092"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e
51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83a
ab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-194253 image ls --format json --alsologtostderr:
I1208 03:50:15.786206  136492 out.go:360] Setting OutFile to fd 1 ...
I1208 03:50:15.786345  136492 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:50:15.786357  136492 out.go:374] Setting ErrFile to fd 2...
I1208 03:50:15.786365  136492 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:50:15.786691  136492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
I1208 03:50:15.787565  136492 config.go:182] Loaded profile config "functional-194253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 03:50:15.787737  136492 config.go:182] Loaded profile config "functional-194253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 03:50:15.790396  136492 ssh_runner.go:195] Run: systemctl --version
I1208 03:50:15.793049  136492 main.go:143] libmachine: domain functional-194253 has defined MAC address 52:54:00:8f:fe:0a in network mk-functional-194253
I1208 03:50:15.793553  136492 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8f:fe:0a", ip: ""} in network mk-functional-194253: {Iface:virbr1 ExpiryTime:2025-12-08 04:47:32 +0000 UTC Type:0 Mac:52:54:00:8f:fe:0a Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:functional-194253 Clientid:01:52:54:00:8f:fe:0a}
I1208 03:50:15.793598  136492 main.go:143] libmachine: domain functional-194253 has defined IP address 192.168.39.115 and MAC address 52:54:00:8f:fe:0a in network mk-functional-194253
I1208 03:50:15.793797  136492 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/functional-194253/id_rsa Username:docker}
I1208 03:50:15.910793  136492 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-194253 image ls --format yaml --alsologtostderr:
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-194253
size: "4944818"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 2a4865958a687c30c2e879bf870a59e4d537b068b0a1b1a35810d30f71098580
repoDigests:
- localhost/minikube-local-cache-test@sha256:796b45a805cdb32b4b7455fbc3e0ee088f16f285d93c1098202fc6399b10eca9
repoTags:
- localhost/minikube-local-cache-test:functional-194253
size: "3330"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-194253 image ls --format yaml --alsologtostderr:
I1208 03:50:15.500181  136474 out.go:360] Setting OutFile to fd 1 ...
I1208 03:50:15.500290  136474 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:50:15.500300  136474 out.go:374] Setting ErrFile to fd 2...
I1208 03:50:15.500307  136474 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:50:15.500609  136474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
I1208 03:50:15.501397  136474 config.go:182] Loaded profile config "functional-194253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 03:50:15.501543  136474 config.go:182] Loaded profile config "functional-194253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 03:50:15.503970  136474 ssh_runner.go:195] Run: systemctl --version
I1208 03:50:15.507012  136474 main.go:143] libmachine: domain functional-194253 has defined MAC address 52:54:00:8f:fe:0a in network mk-functional-194253
I1208 03:50:15.507515  136474 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8f:fe:0a", ip: ""} in network mk-functional-194253: {Iface:virbr1 ExpiryTime:2025-12-08 04:47:32 +0000 UTC Type:0 Mac:52:54:00:8f:fe:0a Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:functional-194253 Clientid:01:52:54:00:8f:fe:0a}
I1208 03:50:15.507559  136474 main.go:143] libmachine: domain functional-194253 has defined IP address 192.168.39.115 and MAC address 52:54:00:8f:fe:0a in network mk-functional-194253
I1208 03:50:15.507775  136474 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/functional-194253/id_rsa Username:docker}
I1208 03:50:15.628960  136474 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
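Note: the YAML listing exercised above can be reproduced by hand. A minimal sketch, assuming a running profile named "functional" (the profile name is illustrative, and the table format is a documented alternative not shown in this log):

  minikube -p functional image ls --format yaml   # machine-readable listing, as in this test
  minikube -p functional image ls --format table  # same data rendered as a table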

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-194253 ssh pgrep buildkitd: exit status 1 (190.830802ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 image build -t localhost/my-image:functional-194253 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-194253 image build -t localhost/my-image:functional-194253 testdata/build --alsologtostderr: (4.033527578s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-194253 image build -t localhost/my-image:functional-194253 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f0e7516cfe3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-194253
--> dec215468d5
Successfully tagged localhost/my-image:functional-194253
dec215468d5ee745661271629a8b3348f5f0df27cae0de2233e5a37652621e9e
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-194253 image build -t localhost/my-image:functional-194253 testdata/build --alsologtostderr:
I1208 03:50:15.961648  136506 out.go:360] Setting OutFile to fd 1 ...
I1208 03:50:15.961796  136506 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:50:15.961807  136506 out.go:374] Setting ErrFile to fd 2...
I1208 03:50:15.961814  136506 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:50:15.962056  136506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
I1208 03:50:15.962649  136506 config.go:182] Loaded profile config "functional-194253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 03:50:15.963412  136506 config.go:182] Loaded profile config "functional-194253": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1208 03:50:15.965970  136506 ssh_runner.go:195] Run: systemctl --version
I1208 03:50:15.968760  136506 main.go:143] libmachine: domain functional-194253 has defined MAC address 52:54:00:8f:fe:0a in network mk-functional-194253
I1208 03:50:15.969267  136506 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8f:fe:0a", ip: ""} in network mk-functional-194253: {Iface:virbr1 ExpiryTime:2025-12-08 04:47:32 +0000 UTC Type:0 Mac:52:54:00:8f:fe:0a Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:functional-194253 Clientid:01:52:54:00:8f:fe:0a}
I1208 03:50:15.969307  136506 main.go:143] libmachine: domain functional-194253 has defined IP address 192.168.39.115 and MAC address 52:54:00:8f:fe:0a in network mk-functional-194253
I1208 03:50:15.969470  136506 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/functional-194253/id_rsa Username:docker}
I1208 03:50:16.057728  136506 build_images.go:162] Building image from path: /tmp/build.4165692077.tar
I1208 03:50:16.057835  136506 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1208 03:50:16.078009  136506 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4165692077.tar
I1208 03:50:16.083525  136506 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4165692077.tar: stat -c "%s %y" /var/lib/minikube/build/build.4165692077.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4165692077.tar': No such file or directory
I1208 03:50:16.083565  136506 ssh_runner.go:362] scp /tmp/build.4165692077.tar --> /var/lib/minikube/build/build.4165692077.tar (3072 bytes)
I1208 03:50:16.149656  136506 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4165692077
I1208 03:50:16.175299  136506 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4165692077 -xf /var/lib/minikube/build/build.4165692077.tar
I1208 03:50:16.196319  136506 crio.go:315] Building image: /var/lib/minikube/build/build.4165692077
I1208 03:50:16.196443  136506 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-194253 /var/lib/minikube/build/build.4165692077 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1208 03:50:19.899336  136506 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-194253 /var/lib/minikube/build/build.4165692077 --cgroup-manager=cgroupfs: (3.702836636s)
I1208 03:50:19.899455  136506 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4165692077
I1208 03:50:19.912230  136506 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4165692077.tar
I1208 03:50:19.924853  136506 build_images.go:218] Built localhost/my-image:functional-194253 from /tmp/build.4165692077.tar
I1208 03:50:19.924911  136506 build_images.go:134] succeeded building to: functional-194253
I1208 03:50:19.924920  136506 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.43s)
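As the log above shows, on the cri-o runtime `minikube image build` ships the build context to the node as a tarball and runs `sudo podman build` there. A minimal manual equivalent of what this test automates (profile, tag, and path are illustrative):

  minikube -p functional image build -t localhost/my-image:demo ./testdata/build
  minikube -p functional image ls | grep my-image   # confirm the image landed in the node's store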

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.927713868s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-194253
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.95s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 image load --daemon kicbase/echo-server:functional-194253 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.18s)
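The load-from-daemon path above copies an image from the host's Docker daemon into the cluster node. A minimal sketch of the same flow (image tag and profile name are illustrative):

  docker pull kicbase/echo-server:1.0
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:demo
  minikube -p functional image load --daemon kicbase/echo-server:demo
  minikube -p functional image ls | grep echo-server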

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 image load --daemon kicbase/echo-server:functional-194253 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-194253
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 image load --daemon kicbase/echo-server:functional-194253 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 image ls
I1208 03:49:45.981710  129900 detect.go:223] nested VM detected
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.83s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 service list -o json
functional_test.go:1504: Took "220.001624ms" to run "out/minikube-linux-amd64 -p functional-194253 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.22s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.115:31606
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 image save kicbase/echo-server:functional-194253 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.66s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.115:31606
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
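The ServiceCmd tests above all resolve NodePort endpoints for a service. A minimal sketch of the same lookups (service and profile names are illustrative):

  minikube -p functional service list                                   # table of all services
  minikube -p functional service hello-node --url                       # plain http://<node-ip>:<nodeport>
  minikube -p functional service --namespace=default --https --url hello-node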

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 image rm kicbase/echo-server:functional-194253 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (25.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-194253 /tmp/TestFunctionalparallelMountCmdany-port1333104725/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765165786817132315" to /tmp/TestFunctionalparallelMountCmdany-port1333104725/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765165786817132315" to /tmp/TestFunctionalparallelMountCmdany-port1333104725/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765165786817132315" to /tmp/TestFunctionalparallelMountCmdany-port1333104725/001/test-1765165786817132315
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-194253 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (193.760127ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1208 03:49:47.011257  129900 retry.go:31] will retry after 650.4327ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  8 03:49 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  8 03:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  8 03:49 test-1765165786817132315
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh cat /mount-9p/test-1765165786817132315
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-194253 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [9eaed5d3-b06d-4ada-b8e5-c9e0244d4f08] Pending
helpers_test.go:352: "busybox-mount" [9eaed5d3-b06d-4ada-b8e5-c9e0244d4f08] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [9eaed5d3-b06d-4ada-b8e5-c9e0244d4f08] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [9eaed5d3-b06d-4ada-b8e5-c9e0244d4f08] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 23.009693958s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-194253 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-194253 /tmp/TestFunctionalparallelMountCmdany-port1333104725/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (25.30s)
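The mount test above drives a host-to-guest 9p mount and verifies it from inside the node. A minimal manual sketch (paths and profile name are illustrative):

  minikube mount -p functional /tmp/data:/mount-9p &            # runs in the foreground until killed
  minikube -p functional ssh "findmnt -T /mount-9p | grep 9p"   # verify the 9p mount is live
  minikube -p functional ssh -- ls -la /mount-9p
  minikube mount -p functional --kill=true                      # tear the mount process down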

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-194253
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 image save --daemon kicbase/echo-server:functional-194253 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-194253
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)
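Taken together, the save/remove/load tests above form a tarball round trip between node and host. A minimal sketch (image tag and file path are illustrative):

  minikube -p functional image save kicbase/echo-server:demo /tmp/echo-server.tar
  minikube -p functional image rm kicbase/echo-server:demo
  minikube -p functional image load /tmp/echo-server.tar
  minikube -p functional image save --daemon kicbase/echo-server:demo   # copy back into the host's Docker daemon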

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-194253 /tmp/TestFunctionalparallelMountCmdspecific-port870943783/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-194253 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (192.575109ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1208 03:50:12.306791  129900 retry.go:31] will retry after 373.72573ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-194253 /tmp/TestFunctionalparallelMountCmdspecific-port870943783/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-194253 ssh "sudo umount -f /mount-9p": exit status 1 (164.986518ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-194253 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-194253 /tmp/TestFunctionalparallelMountCmdspecific-port870943783/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.31s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-194253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup393655314/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-194253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup393655314/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-194253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup393655314/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-194253 ssh "findmnt -T" /mount1: exit status 1 (187.785482ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1208 03:50:13.613227  129900 retry.go:31] will retry after 368.648617ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-194253 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-194253 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-194253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup393655314/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-194253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup393655314/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-194253 /tmp/TestFunctionalparallelMountCmdVerifyCleanup393655314/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.17s)
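VerifyCleanup exercises the --kill flag, which tears down every outstanding mount process for a profile in one step. A minimal sketch (paths and profile name are illustrative):

  minikube mount -p functional /tmp/data:/mount1 &
  minikube mount -p functional /tmp/data:/mount2 &
  minikube mount -p functional --kill=true    # kills all mount processes for the profile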

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-194253
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-194253
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-194253
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21409-125868/.minikube/files/etc/test/nested/copy/129900/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (74.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-940895 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-940895 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m14.091307334s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (74.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (29.09s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1208 03:51:47.558931  129900 config.go:182] Loaded profile config "functional-940895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-940895 --alsologtostderr -v=8
E1208 03:52:01.995876  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-940895 --alsologtostderr -v=8: (29.090099927s)
functional_test.go:678: soft start took 29.090498643s for "functional-940895" cluster.
I1208 03:52:16.649375  129900 config.go:182] Loaded profile config "functional-940895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (29.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-940895 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-940895 cache add registry.k8s.io/pause:latest: (1.036722084s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.03s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-940895 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach2168870601/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 cache add minikube-local-cache-test:functional-940895
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-940895 cache add minikube-local-cache-test:functional-940895: (1.926172802s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 cache delete minikube-local-cache-test:functional-940895
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-940895
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.22s)
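add_local caches a locally built image by tag rather than pulling from a registry. A minimal sketch of the same sequence (tag, context directory, and profile name are illustrative):

  docker build -t minikube-local-cache-test:demo ./some-context
  minikube -p functional cache add minikube-local-cache-test:demo
  minikube -p functional cache delete minikube-local-cache-test:demo
  docker rmi minikube-local-cache-test:demo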

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-940895 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (181.465251ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.51s)
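cache_reload verifies that an image deleted inside the node can be restored from minikube's on-host cache. A minimal sketch of the same cycle (profile name is illustrative):

  minikube -p functional cache add registry.k8s.io/pause:latest
  minikube -p functional ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p functional ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone from the node
  minikube -p functional cache reload                                            # re-push cached images to the node
  minikube -p functional ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again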

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 kubectl -- --context functional-940895 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-940895 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (36.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-940895 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1208 03:52:29.709274  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-940895 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.397258313s)
functional_test.go:776: restart took 36.397399441s for "functional-940895" cluster.
I1208 03:53:00.637268  129900 config.go:182] Loaded profile config "functional-940895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (36.40s)
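ExtraConfig restarts the cluster with a component flag passed through to the apiserver via --extra-config. The invocation pattern, with an illustrative profile name:

  minikube start -p functional --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all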

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-940895 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-940895 logs: (1.202651654s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2554562904/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-940895 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2554562904/001/logs.txt: (1.203513732s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-940895 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-940895
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-940895: exit status 115 (224.940962ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.191:32272 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-940895 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.04s)
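InvalidService confirms that `minikube service` fails fast (exit status 115, SVC_UNREACHABLE) when a service has no running backing pod. A minimal sketch, assuming an invalid service manifest like the testdata one:

  kubectl --context functional apply -f testdata/invalidsvc.yaml
  minikube service invalid-svc -p functional    # exit 115: no running pod for service
  kubectl --context functional delete -f testdata/invalidsvc.yaml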

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-940895 config get cpus: exit status 14 (77.656697ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-940895 config get cpus: exit status 14 (70.535165ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.47s)
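ConfigCmd relies on `config get` returning exit status 14 when a key is unset, as the stderr above shows. A minimal sketch (profile name is illustrative):

  minikube -p functional config set cpus 2
  minikube -p functional config get cpus     # prints 2
  minikube -p functional config unset cpus
  minikube -p functional config get cpus     # exit status 14: key not found in config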

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (13.71s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-940895 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-940895 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 138754: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (13.71s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-940895 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-940895 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (107.051057ms)

                                                
                                                
-- stdout --
	* [functional-940895] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 03:53:16.739833  138519 out.go:360] Setting OutFile to fd 1 ...
	I1208 03:53:16.739951  138519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 03:53:16.739960  138519 out.go:374] Setting ErrFile to fd 2...
	I1208 03:53:16.739964  138519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 03:53:16.740197  138519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
	I1208 03:53:16.740645  138519 out.go:368] Setting JSON to false
	I1208 03:53:16.741642  138519 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2141,"bootTime":1765163856,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 03:53:16.741710  138519 start.go:143] virtualization: kvm guest
	I1208 03:53:16.743424  138519 out.go:179] * [functional-940895] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1208 03:53:16.744436  138519 out.go:179]   - MINIKUBE_LOCATION=21409
	I1208 03:53:16.744441  138519 notify.go:221] Checking for updates...
	I1208 03:53:16.746463  138519 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 03:53:16.747604  138519 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig
	I1208 03:53:16.748651  138519 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube
	I1208 03:53:16.749631  138519 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 03:53:16.750508  138519 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 03:53:16.751687  138519 config.go:182] Loaded profile config "functional-940895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 03:53:16.752142  138519 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 03:53:16.783715  138519 out.go:179] * Using the kvm2 driver based on existing profile
	I1208 03:53:16.784689  138519 start.go:309] selected driver: kvm2
	I1208 03:53:16.784705  138519 start.go:927] validating driver "kvm2" against &{Name:functional-940895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-940895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 03:53:16.784831  138519 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 03:53:16.786976  138519 out.go:203] 
	W1208 03:53:16.787925  138519 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1208 03:53:16.788811  138519 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-940895 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.21s)
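DryRun validates flags without touching the VM; requesting less than the usable minimum of 1800MB fails with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), as above. A minimal sketch (profile name is illustrative):

  minikube start -p functional --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio   # exit 23
  minikube start -p functional --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio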

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-940895 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-940895 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (124.298772ms)

-- stdout --
	* [functional-940895] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1208 03:53:09.801646  138275 out.go:360] Setting OutFile to fd 1 ...
	I1208 03:53:09.801824  138275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 03:53:09.801839  138275 out.go:374] Setting ErrFile to fd 2...
	I1208 03:53:09.801845  138275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 03:53:09.802124  138275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
	I1208 03:53:09.802593  138275 out.go:368] Setting JSON to false
	I1208 03:53:09.803447  138275 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2134,"bootTime":1765163856,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 03:53:09.803516  138275 start.go:143] virtualization: kvm guest
	I1208 03:53:09.805401  138275 out.go:179] * [functional-940895] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1208 03:53:09.806551  138275 out.go:179]   - MINIKUBE_LOCATION=21409
	I1208 03:53:09.806551  138275 notify.go:221] Checking for updates...
	I1208 03:53:09.808953  138275 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 03:53:09.810033  138275 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig
	I1208 03:53:09.811009  138275 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube
	I1208 03:53:09.812084  138275 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 03:53:09.813021  138275 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 03:53:09.814395  138275 config.go:182] Loaded profile config "functional-940895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 03:53:09.814869  138275 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 03:53:09.850931  138275 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1208 03:53:09.851960  138275 start.go:309] selected driver: kvm2
	I1208 03:53:09.851975  138275 start.go:927] validating driver "kvm2" against &{Name:functional-940895 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21409/minikube-v1.37.0-1765151505-21409-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-940895 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.191 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1208 03:53:09.852074  138275 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 03:53:09.853824  138275 out.go:203] 
	W1208 03:53:09.854788  138275 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1208 03:53:09.855624  138275 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)
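
Note: the French stderr above is the localized form of the same RSRC_INSUFFICIENT_REQ_MEMORY failure, roughly: "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is below the usable minimum of 1800 MB" (the stdout line "Utilisation du pilote kvm2 basé sur le profil existant" is "Using the kvm2 driver based on the existing profile"). A sketch of driving the same command under a French locale, assuming minikube selects the translation from the standard locale environment variables, which this log does not itself show:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-940895", "--dry-run", "--memory", "250MB")
	// Assumption: LC_ALL is honored for message translation; the harness
	// may set a different variable.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
	out, _ := cmd.CombinedOutput()
	fmt.Println(string(out))
}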

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.65s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.65s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (10.48s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-940895 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-940895 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-9f67c86d4-4vrl9" [4153c1bc-6662-4b9d-ab1b-9d68396b0b49] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-9f67c86d4-4vrl9" [4153c1bc-6662-4b9d-ab1b-9d68396b0b49] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.0034677s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.191:30092
functional_test.go:1680: http://192.168.39.191:30092: success! body:
Request served by hello-node-connect-9f67c86d4-4vrl9

HTTP/1.1 GET /

Host: 192.168.39.191:30092
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (10.48s)
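
Note: the URL printed by `service hello-node-connect --url` is a plain NodePort endpoint, and the body above is just the echo-server reflecting the request back. A minimal client performing the same check (the endpoint is taken from this run and is only valid while that cluster exists):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.39.191:30092") // endpoint from the log above
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %d\n%s", resp.StatusCode, body)
}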

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.20s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.36s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh -n functional-940895 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 cp functional-940895:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp161039055/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh -n functional-940895 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh -n functional-940895 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (30.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-940895 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-844cf969f6-6m2t6" [15d8820a-958c-4b84-b3f5-82fcd7c32a4b] Pending
helpers_test.go:352: "mysql-844cf969f6-6m2t6" [15d8820a-958c-4b84-b3f5-82fcd7c32a4b] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-844cf969f6-6m2t6" [15d8820a-958c-4b84-b3f5-82fcd7c32a4b] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 27.253116682s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-940895 exec mysql-844cf969f6-6m2t6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-940895 exec mysql-844cf969f6-6m2t6 -- mysql -ppassword -e "show databases;": exit status 1 (128.047413ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1208 03:53:46.665107  129900 retry.go:31] will retry after 1.259850213s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-940895 exec mysql-844cf969f6-6m2t6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-940895 exec mysql-844cf969f6-6m2t6 -- mysql -ppassword -e "show databases;": exit status 1 (119.87668ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1208 03:53:48.045928  129900 retry.go:31] will retry after 1.317415516s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-940895 exec mysql-844cf969f6-6m2t6 -- mysql -ppassword -e "show databases;"
E1208 03:54:36.252931  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:54:36.259307  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:54:36.270685  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:54:36.292026  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:54:36.333401  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:54:36.414886  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:54:36.576466  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:54:36.898257  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:54:37.540567  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:54:38.822483  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:54:41.384392  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:54:46.506580  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:54:56.748831  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:55:17.230578  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:55:58.192364  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:57:01.995576  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 03:57:20.114163  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (30.38s)
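
Note: the pod reports Running before mysqld inside it accepts connections, hence the two ERROR 2002 attempts and the `retry.go:31] will retry after ...` lines. A generic retry-with-growing-delay sketch in the same spirit (illustrative only; this is not minikube's retry package):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retry runs fn up to attempts times, sleeping a growing delay between tries.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the delay a little each round
	}
	return err
}

func main() {
	err := retry(5, time.Second, func() error {
		return exec.Command("kubectl", "--context", "functional-940895",
			"exec", "mysql-844cf969f6-6m2t6", "--",
			"mysql", "-ppassword", "-e", "show databases;").Run()
	})
	fmt.Println("final result:", err)
}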

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/129900/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "sudo cat /etc/test/nested/copy/129900/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/129900.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "sudo cat /etc/ssl/certs/129900.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/129900.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "sudo cat /usr/share/ca-certificates/129900.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1299002.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "sudo cat /etc/ssl/certs/1299002.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1299002.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "sudo cat /usr/share/ca-certificates/1299002.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.22s)
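
Note: the test expects the synced certificate under both /etc/ssl/certs and /usr/share/ca-certificates, plus the OpenSSL subject-hash file names (51391683.0, 3ec20f2e.0). A local sketch of the same existence check, assuming direct filesystem access rather than `minikube ssh`:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Paths mirror the ones probed inside the VM above.
	paths := []string{
		"/etc/ssl/certs/129900.pem",
		"/usr/share/ca-certificates/129900.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			fmt.Printf("%s: MISSING (%v)\n", p, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", p, len(data))
	}
}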

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-940895 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.06s)
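
Note: the go-template passed to kubectl ranges over the first node's .metadata.labels map and prints only the keys. The same template evaluated against a literal structure with Go's text/template (the sample labels are illustrative):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Same shape kubectl feeds the template: a list with one node item.
	data := map[string]any{
		"items": []map[string]any{{
			"metadata": map[string]any{
				"labels": map[string]string{ // illustrative labels
					"kubernetes.io/hostname": "functional-940895",
					"kubernetes.io/os":       "linux",
				},
			},
		}},
	}
	tmpl := template.Must(template.New("labels").Parse(
		`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
	tmpl.Execute(os.Stdout, data) // prints the label keys separated by spaces
}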

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-940895 ssh "sudo systemctl is-active docker": exit status 1 (174.27879ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-940895 ssh "sudo systemctl is-active containerd": exit status 1 (164.501241ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.34s)
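
Note: `systemctl is-active` prints the unit state and exits non-zero when the unit is not active (status 3 above), so "inactive" plus a failing exit is the expected result for the runtimes that should be off in a crio cluster. A sketch of reading both the state and the exit code (assumes a Linux host with systemd):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("systemctl", "is-active", unit).Output()
		state := strings.TrimSpace(string(out))
		code := 0
		if err != nil {
			exitErr, ok := err.(*exec.ExitError)
			if !ok {
				fmt.Printf("%s: cannot run systemctl: %v\n", unit, err)
				continue
			}
			code = exitErr.ExitCode() // 3 means the unit is inactive
		}
		fmt.Printf("%s: state=%q exit=%d\n", unit, state, code)
	}
}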

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.41s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.41s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (10.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-940895 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-940895 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-5758569b79-kjbzs" [1243389f-c888-4e1c-8617-67797cf33b1f] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-5758569b79-kjbzs" [1243389f-c888-4e1c-8617-67797cf33b1f] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.005004188s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (10.19s)
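
Note: waiting for the `app=hello-node` pod to become healthy is a poll-until-deadline loop around kubectl. A generic sketch of that pattern (the selector and context come from this log; the polling loop itself is illustrative):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()
	for {
		out, err := exec.CommandContext(ctx, "kubectl", "--context", "functional-940895",
			"get", "pods", "-l", "app=hello-node",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("pod is Running")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod:", ctx.Err())
			return
		case <-time.After(2 * time.Second):
		}
	}
}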

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "258.420156ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "63.292312ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "240.410881ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "59.456253ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (9.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-940895 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3797430188/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765165988644017407" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3797430188/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765165988644017407" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3797430188/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765165988644017407" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3797430188/001/test-1765165988644017407
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-940895 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (160.222487ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1208 03:53:08.804559  129900 retry.go:31] will retry after 534.538486ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  8 03:53 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  8 03:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  8 03:53 test-1765165988644017407
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh cat /mount-9p/test-1765165988644017407
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-940895 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [74d484a5-bba5-4887-b8f5-0219ec3bf338] Pending
helpers_test.go:352: "busybox-mount" [74d484a5-bba5-4887-b8f5-0219ec3bf338] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [74d484a5-bba5-4887-b8f5-0219ec3bf338] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [74d484a5-bba5-4887-b8f5-0219ec3bf338] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004769377s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-940895 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-940895 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3797430188/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (9.05s)
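
Note: `findmnt -T /mount-9p | grep 9p` simply confirms the host directory arrived as a 9p mount before the busybox pod exercises it (the first probe races the mount daemon, hence the single retry above). The equivalent check by scanning the kernel mount table directly (Linux-only sketch):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		fmt.Println("cannot read mount table:", err)
		return
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Fields: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[1] == "/mount-9p" && fields[2] == "9p" {
			fmt.Println("9p mount present:", sc.Text())
			return
		}
	}
	fmt.Println("/mount-9p is not a 9p mount")
}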

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-940895 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-940895
localhost/kicbase/echo-server:functional-940895
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-940895 image ls --format short --alsologtostderr:
I1208 03:53:21.337738  139008 out.go:360] Setting OutFile to fd 1 ...
I1208 03:53:21.338012  139008 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:53:21.338022  139008 out.go:374] Setting ErrFile to fd 2...
I1208 03:53:21.338026  139008 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:53:21.338292  139008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
I1208 03:53:21.338884  139008 config.go:182] Loaded profile config "functional-940895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 03:53:21.339005  139008 config.go:182] Loaded profile config "functional-940895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 03:53:21.341249  139008 ssh_runner.go:195] Run: systemctl --version
I1208 03:53:21.343782  139008 main.go:143] libmachine: domain functional-940895 has defined MAC address 52:54:00:55:1c:ae in network mk-functional-940895
I1208 03:53:21.344256  139008 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:1c:ae", ip: ""} in network mk-functional-940895: {Iface:virbr1 ExpiryTime:2025-12-08 04:50:48 +0000 UTC Type:0 Mac:52:54:00:55:1c:ae Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-940895 Clientid:01:52:54:00:55:1c:ae}
I1208 03:53:21.344298  139008 main.go:143] libmachine: domain functional-940895 has defined IP address 192.168.39.191 and MAC address 52:54:00:55:1c:ae in network mk-functional-940895
I1208 03:53:21.344450  139008 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/functional-940895/id_rsa Username:docker}
I1208 03:53:21.431109  139008 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.22s)
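
Note: the stderr shows `image ls` is backed by `sudo crictl images --output json` inside the guest. A sketch of reducing that JSON to the short repo-tag listing above, assuming crictl's usual {"images": [...]} shape; the field names follow the CRI ListImages response and are not taken verbatim from this log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Minimal view of crictl's image listing; see the assumption in the note above.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag) // same content as the short listing above
		}
	}
}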

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-940895 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-940895  │ 9056ab77afb8e │ 4.95MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/my-image                      │ functional-940895  │ 2ab434351d33c │ 1.47MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/minikube-local-cache-test     │ functional-940895  │ 2a4865958a687 │ 3.33kB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-940895 image ls --format table --alsologtostderr:
I1208 03:53:26.652180  139117 out.go:360] Setting OutFile to fd 1 ...
I1208 03:53:26.652444  139117 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:53:26.652460  139117 out.go:374] Setting ErrFile to fd 2...
I1208 03:53:26.652467  139117 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:53:26.653059  139117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
I1208 03:53:26.654535  139117 config.go:182] Loaded profile config "functional-940895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 03:53:26.654708  139117 config.go:182] Loaded profile config "functional-940895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 03:53:26.657347  139117 ssh_runner.go:195] Run: systemctl --version
I1208 03:53:26.659987  139117 main.go:143] libmachine: domain functional-940895 has defined MAC address 52:54:00:55:1c:ae in network mk-functional-940895
I1208 03:53:26.660451  139117 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:1c:ae", ip: ""} in network mk-functional-940895: {Iface:virbr1 ExpiryTime:2025-12-08 04:50:48 +0000 UTC Type:0 Mac:52:54:00:55:1c:ae Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-940895 Clientid:01:52:54:00:55:1c:ae}
I1208 03:53:26.660495  139117 main.go:143] libmachine: domain functional-940895 has defined IP address 192.168.39.191 and MAC address 52:54:00:55:1c:ae in network mk-functional-940895
I1208 03:53:26.660649  139117 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/functional-940895/id_rsa Username:docker}
I1208 03:53:26.762749  139117 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-940895 image ls --format json --alsologtostderr:
[{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"d0f5aca8052a44807fe3828f583bb81eb4714ce72c8369f7cdda1a2818023af6","repoDigests":["docker.io/library/643df194d7189b5c70fafff1a6efd9ee20dada97f3a3dfcfc79070b257f37b80-tmp@sha256:b3f232728d5ce2f82cf4d2cfdd2a8a874666ce88bb10dc6d8f9605f0e45beee9"],"repoTags":[],"size":"1466018"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"2ab434351d33c5caf37d404984eb8a438853a8ec3ebc4b805dcb5176fb3844be","repoDigests":["localhost/my-image@sha256:a164b943162d37f4fb9462caabd31b0282db8562ee4e4124d6f0357fbab76605"],"repoTags":["localhost/my-image:functional-940895"],"size":"1468600"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"2a4865958a687c30c2e879bf870a59e4d537b068b0a1b1a35810d30f71098580","repoDigests":["localhost/minikube-local-cache-test@sha256:796b45a805cdb32b4b7455fbc3e0ee088f16f285d93c1098202fc6399b10eca9"],"repoTags":["localhost/minikube-local-cache-test:functional-940895"],"size":"3330"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-940895"],"size":"4945146"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-940895 image ls --format json --alsologtostderr:
I1208 03:53:26.409594  139107 out.go:360] Setting OutFile to fd 1 ...
I1208 03:53:26.409697  139107 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:53:26.409703  139107 out.go:374] Setting ErrFile to fd 2...
I1208 03:53:26.409709  139107 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:53:26.409955  139107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
I1208 03:53:26.410582  139107 config.go:182] Loaded profile config "functional-940895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 03:53:26.410676  139107 config.go:182] Loaded profile config "functional-940895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 03:53:26.412822  139107 ssh_runner.go:195] Run: systemctl --version
I1208 03:53:26.415044  139107 main.go:143] libmachine: domain functional-940895 has defined MAC address 52:54:00:55:1c:ae in network mk-functional-940895
I1208 03:53:26.415424  139107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:1c:ae", ip: ""} in network mk-functional-940895: {Iface:virbr1 ExpiryTime:2025-12-08 04:50:48 +0000 UTC Type:0 Mac:52:54:00:55:1c:ae Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-940895 Clientid:01:52:54:00:55:1c:ae}
I1208 03:53:26.415449  139107 main.go:143] libmachine: domain functional-940895 has defined IP address 192.168.39.191 and MAC address 52:54:00:55:1c:ae in network mk-functional-940895
I1208 03:53:26.415576  139107 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/functional-940895/id_rsa Username:docker}
I1208 03:53:26.518019  139107 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.25s)
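For reference, the JSON listing above is ultimately produced by crictl inside the guest (the stderr trace ends with `sudo crictl images --output json`). A minimal sketch of reproducing both views by hand, assuming the binary path and profile name from this run:

  # host-side view via minikube's image subcommand
  out/minikube-linux-amd64 -p functional-940895 image ls --format json
  # the same data straight from the node's runtime, as the test's ssh_runner does
  out/minikube-linux-amd64 -p functional-940895 ssh -- sudo crictl images --output json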

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-940895 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 2a4865958a687c30c2e879bf870a59e4d537b068b0a1b1a35810d30f71098580
repoDigests:
- localhost/minikube-local-cache-test@sha256:796b45a805cdb32b4b7455fbc3e0ee088f16f285d93c1098202fc6399b10eca9
repoTags:
- localhost/minikube-local-cache-test:functional-940895
size: "3330"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-940895
size: "4945146"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-940895 image ls --format yaml --alsologtostderr:
I1208 03:53:21.557285  139018 out.go:360] Setting OutFile to fd 1 ...
I1208 03:53:21.557564  139018 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:53:21.557574  139018 out.go:374] Setting ErrFile to fd 2...
I1208 03:53:21.557578  139018 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:53:21.557767  139018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
I1208 03:53:21.558347  139018 config.go:182] Loaded profile config "functional-940895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 03:53:21.558438  139018 config.go:182] Loaded profile config "functional-940895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 03:53:21.560704  139018 ssh_runner.go:195] Run: systemctl --version
I1208 03:53:21.562999  139018 main.go:143] libmachine: domain functional-940895 has defined MAC address 52:54:00:55:1c:ae in network mk-functional-940895
I1208 03:53:21.563526  139018 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:1c:ae", ip: ""} in network mk-functional-940895: {Iface:virbr1 ExpiryTime:2025-12-08 04:50:48 +0000 UTC Type:0 Mac:52:54:00:55:1c:ae Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-940895 Clientid:01:52:54:00:55:1c:ae}
I1208 03:53:21.563560  139018 main.go:143] libmachine: domain functional-940895 has defined IP address 192.168.39.191 and MAC address 52:54:00:55:1c:ae in network mk-functional-940895
I1208 03:53:21.563745  139018 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/functional-940895/id_rsa Username:docker}
I1208 03:53:21.668742  139018 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.22s)
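The YAML variant is convenient for scripted assertions. A small sketch, assuming yq is available on the host (it is not part of this run), to extract just the tags:

  out/minikube-linux-amd64 -p functional-940895 image ls --format yaml | yq '.[].repoTags[]'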

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.62s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-940895 ssh pgrep buildkitd: exit status 1 (178.20559ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 image build -t localhost/my-image:functional-940895 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-940895 image build -t localhost/my-image:functional-940895 testdata/build --alsologtostderr: (4.166232586s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-940895 image build -t localhost/my-image:functional-940895 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d0f5aca8052
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-940895
--> 2ab434351d3
Successfully tagged localhost/my-image:functional-940895
2ab434351d33c5caf37d404984eb8a438853a8ec3ebc4b805dcb5176fb3844be
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-940895 image build -t localhost/my-image:functional-940895 testdata/build --alsologtostderr:
I1208 03:53:21.961868  139040 out.go:360] Setting OutFile to fd 1 ...
I1208 03:53:21.962019  139040 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:53:21.962029  139040 out.go:374] Setting ErrFile to fd 2...
I1208 03:53:21.962033  139040 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1208 03:53:21.962259  139040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
I1208 03:53:21.962846  139040 config.go:182] Loaded profile config "functional-940895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 03:53:21.963583  139040 config.go:182] Loaded profile config "functional-940895": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1208 03:53:21.965854  139040 ssh_runner.go:195] Run: systemctl --version
I1208 03:53:21.968192  139040 main.go:143] libmachine: domain functional-940895 has defined MAC address 52:54:00:55:1c:ae in network mk-functional-940895
I1208 03:53:21.968600  139040 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:55:1c:ae", ip: ""} in network mk-functional-940895: {Iface:virbr1 ExpiryTime:2025-12-08 04:50:48 +0000 UTC Type:0 Mac:52:54:00:55:1c:ae Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:functional-940895 Clientid:01:52:54:00:55:1c:ae}
I1208 03:53:21.968638  139040 main.go:143] libmachine: domain functional-940895 has defined IP address 192.168.39.191 and MAC address 52:54:00:55:1c:ae in network mk-functional-940895
I1208 03:53:21.968793  139040 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/functional-940895/id_rsa Username:docker}
I1208 03:53:22.055692  139040 build_images.go:162] Building image from path: /tmp/build.955719928.tar
I1208 03:53:22.055780  139040 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1208 03:53:22.068590  139040 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.955719928.tar
I1208 03:53:22.073797  139040 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.955719928.tar: stat -c "%s %y" /var/lib/minikube/build/build.955719928.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.955719928.tar': No such file or directory
I1208 03:53:22.073833  139040 ssh_runner.go:362] scp /tmp/build.955719928.tar --> /var/lib/minikube/build/build.955719928.tar (3072 bytes)
I1208 03:53:22.122497  139040 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.955719928
I1208 03:53:22.138702  139040 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.955719928 -xf /var/lib/minikube/build/build.955719928.tar
I1208 03:53:22.153529  139040 crio.go:315] Building image: /var/lib/minikube/build/build.955719928
I1208 03:53:22.153588  139040 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-940895 /var/lib/minikube/build/build.955719928 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1208 03:53:26.013735  139040 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-940895 /var/lib/minikube/build/build.955719928 --cgroup-manager=cgroupfs: (3.860119606s)
I1208 03:53:26.013835  139040 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.955719928
I1208 03:53:26.034591  139040 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.955719928.tar
I1208 03:53:26.054408  139040 build_images.go:218] Built localhost/my-image:functional-940895 from /tmp/build.955719928.tar
I1208 03:53:26.054455  139040 build_images.go:134] succeeded building to: functional-940895
I1208 03:53:26.054460  139040 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (4.62s)
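The three STEP lines in the stdout above imply a minimal build context: FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt. A sketch that rebuilds it by hand; the directory and file contents are illustrative, while the commands mirror the ones logged:

  mkdir -p /tmp/build-demo && cd /tmp/build-demo
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
  echo hello > content.txt
  out/minikube-linux-amd64 -p functional-940895 image build -t localhost/my-image:functional-940895 .
  out/minikube-linux-amd64 -p functional-940895 image ls | grep my-image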

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.91s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-940895
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.91s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 image load --daemon kicbase/echo-server:functional-940895 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.10s)
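image load --daemon copies a tag from the host's Docker daemon into the cluster's container runtime. The manual equivalent of the Setup and LoadDaemon steps above, using the same tag the test sets up:

  docker pull kicbase/echo-server:1.0
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-940895
  out/minikube-linux-amd64 -p functional-940895 image load --daemon kicbase/echo-server:functional-940895
  out/minikube-linux-amd64 -p functional-940895 image ls | grep echo-server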

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.85s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 image load --daemon kicbase/echo-server:functional-940895 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.85s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-940895
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 image load --daemon kicbase/echo-server:functional-940895 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.6s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 image save kicbase/echo-server:functional-940895 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.60s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.44s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 image rm kicbase/echo-server:functional-940895 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.44s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.66s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.66s)
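ImageSaveToFile, ImageRemove, and ImageLoadFromFile together round-trip an image through a tarball. A condensed sketch of the same flow; the tarball path is illustrative, the commands match the logged ones:

  out/minikube-linux-amd64 -p functional-940895 image save kicbase/echo-server:functional-940895 /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-940895 image rm kicbase/echo-server:functional-940895
  out/minikube-linux-amd64 -p functional-940895 image load /tmp/echo-server-save.tar
  out/minikube-linux-amd64 -p functional-940895 image ls | grep echo-server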

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-940895
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 image save --daemon kicbase/echo-server:functional-940895 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-940895
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.54s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.73s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-940895 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3127299969/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-940895 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (198.010127ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1208 03:53:17.895941  129900 retry.go:31] will retry after 683.848766ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-940895 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3127299969/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-940895 ssh "sudo umount -f /mount-9p": exit status 1 (210.506047ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-940895 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-940895 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3127299969/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.73s)
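The mount test starts a 9p server on a fixed port, polls findmnt until the mount appears (hence the single retry visible above), then tears it down. Reproduced by hand with an illustrative host directory; the test harness runs the mount as a managed daemon where this sketch simply backgrounds it:

  mkdir -p /tmp/mount-demo
  out/minikube-linux-amd64 mount -p functional-940895 /tmp/mount-demo:/mount-9p --port 46464 &
  out/minikube-linux-amd64 -p functional-940895 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-940895 ssh -- ls -la /mount-9p
  out/minikube-linux-amd64 -p functional-940895 ssh "sudo umount -f /mount-9p"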

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.27s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 service list -o json
functional_test.go:1504: Took "274.040803ms" to run "out/minikube-linux-amd64 -p functional-940895 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.27s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.191:31844
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.191:31844
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.37s)
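The four ServiceCmd subtests resolve the same NodePort endpoint in different output formats. A compact sketch of the same queries; the final curl probe is an illustrative addition, not part of the test:

  out/minikube-linux-amd64 -p functional-940895 service list -o json
  out/minikube-linux-amd64 -p functional-940895 service --namespace=default --https --url hello-node
  URL=$(out/minikube-linux-amd64 -p functional-940895 service hello-node --url)
  curl -s "$URL"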

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-940895 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3995165135/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-940895 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3995165135/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-940895 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3995165135/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-940895 ssh "findmnt -T" /mount1: exit status 1 (176.591026ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1208 03:53:19.607614  129900 retry.go:31] will retry after 488.589523ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-940895 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-940895 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-940895 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3995165135/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-940895 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3995165135/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-940895 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo3995165135/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.19s)
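VerifyCleanup checks that a single `mount --kill=true` reaps every live mount process for the profile, which is why all three stop attempts afterwards find no parent process. A sketch with illustrative paths:

  out/minikube-linux-amd64 mount -p functional-940895 /tmp/mount-demo:/mount1 &
  out/minikube-linux-amd64 mount -p functional-940895 /tmp/mount-demo:/mount2 &
  out/minikube-linux-amd64 -p functional-940895 ssh "findmnt -T" /mount1
  out/minikube-linux-amd64 mount -p functional-940895 --kill=true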

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-940895
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-940895
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-940895
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (193.07s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1208 03:59:36.253231  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:00:03.956365  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:02:01.995488  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-643000 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m12.536070898s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (193.07s)
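StartCluster brings up a multi-node HA control plane in a single invocation, with --wait true blocking until all components report healthy; the cert_rotation errors above appear to be stale client-cert watchers for profiles deleted earlier in the run. The command as executed, minus the harness logging flags:

  out/minikube-linux-amd64 -p ha-643000 start --ha --memory 3072 --wait true --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 -p ha-643000 status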

TestMultiControlPlane/serial/DeployApp (7.56s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-643000 kubectl -- rollout status deployment/busybox: (5.197338341s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 kubectl -- exec busybox-7b57f96db7-9df2r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 kubectl -- exec busybox-7b57f96db7-c9tvd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 kubectl -- exec busybox-7b57f96db7-klf6b -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 kubectl -- exec busybox-7b57f96db7-9df2r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 kubectl -- exec busybox-7b57f96db7-c9tvd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 kubectl -- exec busybox-7b57f96db7-klf6b -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 kubectl -- exec busybox-7b57f96db7-9df2r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 kubectl -- exec busybox-7b57f96db7-c9tvd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 kubectl -- exec busybox-7b57f96db7-klf6b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.56s)
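DeployApp schedules a three-replica busybox deployment and verifies in-cluster DNS from every pod. A condensed version; note that `exec deploy/busybox` picks one arbitrary pod, whereas the test iterates over all three pods by name:

  out/minikube-linux-amd64 -p ha-643000 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  out/minikube-linux-amd64 -p ha-643000 kubectl -- rollout status deployment/busybox
  out/minikube-linux-amd64 -p ha-643000 kubectl -- exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local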

TestMultiControlPlane/serial/PingHostFromPods (1.3s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 kubectl -- exec busybox-7b57f96db7-9df2r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 kubectl -- exec busybox-7b57f96db7-9df2r -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 kubectl -- exec busybox-7b57f96db7-c9tvd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 kubectl -- exec busybox-7b57f96db7-c9tvd -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 kubectl -- exec busybox-7b57f96db7-klf6b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 kubectl -- exec busybox-7b57f96db7-klf6b -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.30s)

TestMultiControlPlane/serial/AddWorkerNode (41.56s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 node add --alsologtostderr -v 5
E1208 04:03:07.333751  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:03:07.340178  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:03:07.351652  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:03:07.373055  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:03:07.414471  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:03:07.495982  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:03:07.657620  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:03:07.979355  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:03:08.621301  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:03:09.903097  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:03:12.465098  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:03:17.586490  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:03:25.071119  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:03:27.828195  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-643000 node add --alsologtostderr -v 5: (40.881564587s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (41.56s)

TestMultiControlPlane/serial/NodeLabels (0.08s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-643000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)

TestMultiControlPlane/serial/CopyFile (10.57s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 cp testdata/cp-test.txt ha-643000:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 cp ha-643000:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile855722625/001/cp-test_ha-643000.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 cp ha-643000:/home/docker/cp-test.txt ha-643000-m02:/home/docker/cp-test_ha-643000_ha-643000-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m02 "sudo cat /home/docker/cp-test_ha-643000_ha-643000-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 cp ha-643000:/home/docker/cp-test.txt ha-643000-m03:/home/docker/cp-test_ha-643000_ha-643000-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m03 "sudo cat /home/docker/cp-test_ha-643000_ha-643000-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 cp ha-643000:/home/docker/cp-test.txt ha-643000-m04:/home/docker/cp-test_ha-643000_ha-643000-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m04 "sudo cat /home/docker/cp-test_ha-643000_ha-643000-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 cp testdata/cp-test.txt ha-643000-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 cp ha-643000-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile855722625/001/cp-test_ha-643000-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 cp ha-643000-m02:/home/docker/cp-test.txt ha-643000:/home/docker/cp-test_ha-643000-m02_ha-643000.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000 "sudo cat /home/docker/cp-test_ha-643000-m02_ha-643000.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 cp ha-643000-m02:/home/docker/cp-test.txt ha-643000-m03:/home/docker/cp-test_ha-643000-m02_ha-643000-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m03 "sudo cat /home/docker/cp-test_ha-643000-m02_ha-643000-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 cp ha-643000-m02:/home/docker/cp-test.txt ha-643000-m04:/home/docker/cp-test_ha-643000-m02_ha-643000-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m04 "sudo cat /home/docker/cp-test_ha-643000-m02_ha-643000-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 cp testdata/cp-test.txt ha-643000-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 cp ha-643000-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile855722625/001/cp-test_ha-643000-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 cp ha-643000-m03:/home/docker/cp-test.txt ha-643000:/home/docker/cp-test_ha-643000-m03_ha-643000.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000 "sudo cat /home/docker/cp-test_ha-643000-m03_ha-643000.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 cp ha-643000-m03:/home/docker/cp-test.txt ha-643000-m02:/home/docker/cp-test_ha-643000-m03_ha-643000-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m02 "sudo cat /home/docker/cp-test_ha-643000-m03_ha-643000-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 cp ha-643000-m03:/home/docker/cp-test.txt ha-643000-m04:/home/docker/cp-test_ha-643000-m03_ha-643000-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m04 "sudo cat /home/docker/cp-test_ha-643000-m03_ha-643000-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 cp testdata/cp-test.txt ha-643000-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 cp ha-643000-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile855722625/001/cp-test_ha-643000-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 cp ha-643000-m04:/home/docker/cp-test.txt ha-643000:/home/docker/cp-test_ha-643000-m04_ha-643000.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000 "sudo cat /home/docker/cp-test_ha-643000-m04_ha-643000.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 cp ha-643000-m04:/home/docker/cp-test.txt ha-643000-m02:/home/docker/cp-test_ha-643000-m04_ha-643000-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m02 "sudo cat /home/docker/cp-test_ha-643000-m04_ha-643000-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 cp ha-643000-m04:/home/docker/cp-test.txt ha-643000-m03:/home/docker/cp-test_ha-643000-m04_ha-643000-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m03 "sudo cat /home/docker/cp-test_ha-643000-m04_ha-643000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.57s)
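CopyFile pushes testdata/cp-test.txt to every node and round-trips it between each pair of nodes; the primitive under test is `minikube cp` plus an ssh readback. One pair shown, taken directly from the sequence above:

  out/minikube-linux-amd64 -p ha-643000 cp testdata/cp-test.txt ha-643000-m02:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p ha-643000 ssh -n ha-643000-m02 "sudo cat /home/docker/cp-test.txt"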

TestMultiControlPlane/serial/StopSecondaryNode (86.82s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 node stop m02 --alsologtostderr -v 5
E1208 04:03:48.309987  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:04:29.272850  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:04:36.255943  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-643000 node stop m02 --alsologtostderr -v 5: (1m26.335562778s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-643000 status --alsologtostderr -v 5: exit status 7 (479.326734ms)
-- stdout --
	ha-643000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-643000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-643000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-643000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1208 04:05:11.567726  143397 out.go:360] Setting OutFile to fd 1 ...
	I1208 04:05:11.568017  143397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 04:05:11.568029  143397 out.go:374] Setting ErrFile to fd 2...
	I1208 04:05:11.568036  143397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 04:05:11.568237  143397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
	I1208 04:05:11.568458  143397 out.go:368] Setting JSON to false
	I1208 04:05:11.568496  143397 mustload.go:66] Loading cluster: ha-643000
	I1208 04:05:11.568617  143397 notify.go:221] Checking for updates...
	I1208 04:05:11.568937  143397 config.go:182] Loaded profile config "ha-643000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 04:05:11.568958  143397 status.go:174] checking status of ha-643000 ...
	I1208 04:05:11.571685  143397 status.go:371] ha-643000 host status = "Running" (err=<nil>)
	I1208 04:05:11.571702  143397 host.go:66] Checking if "ha-643000" exists ...
	I1208 04:05:11.574348  143397 main.go:143] libmachine: domain ha-643000 has defined MAC address 52:54:00:42:49:eb in network mk-ha-643000
	I1208 04:05:11.574838  143397 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:42:49:eb", ip: ""} in network mk-ha-643000: {Iface:virbr1 ExpiryTime:2025-12-08 04:59:45 +0000 UTC Type:0 Mac:52:54:00:42:49:eb Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-643000 Clientid:01:52:54:00:42:49:eb}
	I1208 04:05:11.574890  143397 main.go:143] libmachine: domain ha-643000 has defined IP address 192.168.39.216 and MAC address 52:54:00:42:49:eb in network mk-ha-643000
	I1208 04:05:11.575045  143397 host.go:66] Checking if "ha-643000" exists ...
	I1208 04:05:11.575245  143397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 04:05:11.577359  143397 main.go:143] libmachine: domain ha-643000 has defined MAC address 52:54:00:42:49:eb in network mk-ha-643000
	I1208 04:05:11.577880  143397 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:42:49:eb", ip: ""} in network mk-ha-643000: {Iface:virbr1 ExpiryTime:2025-12-08 04:59:45 +0000 UTC Type:0 Mac:52:54:00:42:49:eb Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-643000 Clientid:01:52:54:00:42:49:eb}
	I1208 04:05:11.577920  143397 main.go:143] libmachine: domain ha-643000 has defined IP address 192.168.39.216 and MAC address 52:54:00:42:49:eb in network mk-ha-643000
	I1208 04:05:11.578102  143397 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/ha-643000/id_rsa Username:docker}
	I1208 04:05:11.658411  143397 ssh_runner.go:195] Run: systemctl --version
	I1208 04:05:11.664382  143397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 04:05:11.680870  143397 kubeconfig.go:125] found "ha-643000" server: "https://192.168.39.254:8443"
	I1208 04:05:11.680919  143397 api_server.go:166] Checking apiserver status ...
	I1208 04:05:11.680969  143397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 04:05:11.700015  143397 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1372/cgroup
	W1208 04:05:11.711197  143397 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1372/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1208 04:05:11.711264  143397 ssh_runner.go:195] Run: ls
	I1208 04:05:11.719126  143397 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1208 04:05:11.725635  143397 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1208 04:05:11.725663  143397 status.go:463] ha-643000 apiserver status = Running (err=<nil>)
	I1208 04:05:11.725676  143397 status.go:176] ha-643000 status: &{Name:ha-643000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 04:05:11.725713  143397 status.go:174] checking status of ha-643000-m02 ...
	I1208 04:05:11.727375  143397 status.go:371] ha-643000-m02 host status = "Stopped" (err=<nil>)
	I1208 04:05:11.727392  143397 status.go:384] host is not running, skipping remaining checks
	I1208 04:05:11.727399  143397 status.go:176] ha-643000-m02 status: &{Name:ha-643000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 04:05:11.727420  143397 status.go:174] checking status of ha-643000-m03 ...
	I1208 04:05:11.728628  143397 status.go:371] ha-643000-m03 host status = "Running" (err=<nil>)
	I1208 04:05:11.728644  143397 host.go:66] Checking if "ha-643000-m03" exists ...
	I1208 04:05:11.731194  143397 main.go:143] libmachine: domain ha-643000-m03 has defined MAC address 52:54:00:13:db:f7 in network mk-ha-643000
	I1208 04:05:11.731625  143397 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:13:db:f7", ip: ""} in network mk-ha-643000: {Iface:virbr1 ExpiryTime:2025-12-08 05:01:39 +0000 UTC Type:0 Mac:52:54:00:13:db:f7 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-643000-m03 Clientid:01:52:54:00:13:db:f7}
	I1208 04:05:11.731659  143397 main.go:143] libmachine: domain ha-643000-m03 has defined IP address 192.168.39.107 and MAC address 52:54:00:13:db:f7 in network mk-ha-643000
	I1208 04:05:11.731819  143397 host.go:66] Checking if "ha-643000-m03" exists ...
	I1208 04:05:11.732080  143397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 04:05:11.734418  143397 main.go:143] libmachine: domain ha-643000-m03 has defined MAC address 52:54:00:13:db:f7 in network mk-ha-643000
	I1208 04:05:11.734820  143397 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:13:db:f7", ip: ""} in network mk-ha-643000: {Iface:virbr1 ExpiryTime:2025-12-08 05:01:39 +0000 UTC Type:0 Mac:52:54:00:13:db:f7 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-643000-m03 Clientid:01:52:54:00:13:db:f7}
	I1208 04:05:11.734856  143397 main.go:143] libmachine: domain ha-643000-m03 has defined IP address 192.168.39.107 and MAC address 52:54:00:13:db:f7 in network mk-ha-643000
	I1208 04:05:11.735038  143397 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/ha-643000-m03/id_rsa Username:docker}
	I1208 04:05:11.819718  143397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 04:05:11.837502  143397 kubeconfig.go:125] found "ha-643000" server: "https://192.168.39.254:8443"
	I1208 04:05:11.837534  143397 api_server.go:166] Checking apiserver status ...
	I1208 04:05:11.837572  143397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 04:05:11.858456  143397 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1796/cgroup
	W1208 04:05:11.869988  143397 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1796/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1208 04:05:11.870057  143397 ssh_runner.go:195] Run: ls
	I1208 04:05:11.875035  143397 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1208 04:05:11.879681  143397 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1208 04:05:11.879704  143397 status.go:463] ha-643000-m03 apiserver status = Running (err=<nil>)
	I1208 04:05:11.879713  143397 status.go:176] ha-643000-m03 status: &{Name:ha-643000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 04:05:11.879733  143397 status.go:174] checking status of ha-643000-m04 ...
	I1208 04:05:11.881402  143397 status.go:371] ha-643000-m04 host status = "Running" (err=<nil>)
	I1208 04:05:11.881420  143397 host.go:66] Checking if "ha-643000-m04" exists ...
	I1208 04:05:11.884353  143397 main.go:143] libmachine: domain ha-643000-m04 has defined MAC address 52:54:00:1a:1c:a1 in network mk-ha-643000
	I1208 04:05:11.884774  143397 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1a:1c:a1", ip: ""} in network mk-ha-643000: {Iface:virbr1 ExpiryTime:2025-12-08 05:03:07 +0000 UTC Type:0 Mac:52:54:00:1a:1c:a1 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-643000-m04 Clientid:01:52:54:00:1a:1c:a1}
	I1208 04:05:11.884801  143397 main.go:143] libmachine: domain ha-643000-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:1a:1c:a1 in network mk-ha-643000
	I1208 04:05:11.884936  143397 host.go:66] Checking if "ha-643000-m04" exists ...
	I1208 04:05:11.885141  143397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 04:05:11.887045  143397 main.go:143] libmachine: domain ha-643000-m04 has defined MAC address 52:54:00:1a:1c:a1 in network mk-ha-643000
	I1208 04:05:11.887380  143397 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1a:1c:a1", ip: ""} in network mk-ha-643000: {Iface:virbr1 ExpiryTime:2025-12-08 05:03:07 +0000 UTC Type:0 Mac:52:54:00:1a:1c:a1 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-643000-m04 Clientid:01:52:54:00:1a:1c:a1}
	I1208 04:05:11.887408  143397 main.go:143] libmachine: domain ha-643000-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:1a:1c:a1 in network mk-ha-643000
	I1208 04:05:11.887527  143397 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/ha-643000-m04/id_rsa Username:docker}
	I1208 04:05:11.962421  143397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 04:05:11.978229  143397 status.go:176] ha-643000-m04 status: &{Name:ha-643000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (86.82s)
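
Note on reproducing this check by hand: the non-zero exit is the machine-readable signal here. `status` exited 7 once m02 was down, so a script can branch on the exit code instead of parsing the table. A minimal sketch, assuming the same binary path and profile as above:

	# exit status is non-zero (7 above) while any node in the profile is not running
	if ! out/minikube-linux-amd64 -p ha-643000 status >/dev/null; then
	    echo "ha-643000 is degraded: at least one node is down"
	fi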

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)
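
The degraded-state check above is just `profile list --output json` plus an assertion on the reported cluster status. A rough by-hand equivalent, assuming `jq` is installed; the `.valid[].Name`/`.Status` field names match current minikube profile JSON but are not shown in this log, so treat them as assumptions:

	out/minikube-linux-amd64 profile list --output json \
	    | jq -r '.valid[] | "\(.Name)\t\(.Status)"'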

TestMultiControlPlane/serial/RestartSecondaryNode (34.39s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-643000 node start m02 --alsologtostderr -v 5: (33.445796882s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (34.39s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.66s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.66s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (375.71s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 stop --alsologtostderr -v 5
E1208 04:05:51.194739  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:07:01.997348  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:08:07.333479  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:08:35.039077  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:09:36.252656  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-643000 stop --alsologtostderr -v 5: (4m27.310132555s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 start --wait true --alsologtostderr -v 5
E1208 04:10:59.319107  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:12:01.995892  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-643000 start --wait true --alsologtostderr -v 5: (1m48.259521267s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (375.71s)
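
The flow above is worth calling out: the test snapshots `node list`, stops and restarts the whole profile, and asserts the listing is unchanged. A condensed sketch of the same round trip, assuming the profile still exists:

	before=$(out/minikube-linux-amd64 -p ha-643000 node list)
	out/minikube-linux-amd64 -p ha-643000 stop
	out/minikube-linux-amd64 -p ha-643000 start --wait true
	after=$(out/minikube-linux-amd64 -p ha-643000 node list)
	[ "$before" = "$after" ] && echo "node list preserved across restart"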

TestMultiControlPlane/serial/DeleteSecondaryNode (17.78s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-643000 node delete m03 --alsologtostderr -v 5: (17.169441763s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.78s)
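
The go-template query at ha_test.go:521 stands alone as a quick readiness probe: it prints one True/False per node's Ready condition. The same template with shell-friendly quoting:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'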

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.5s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.50s)

TestMultiControlPlane/serial/StopCluster (258.72s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 stop --alsologtostderr -v 5
E1208 04:13:07.334019  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:14:36.255940  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-643000 stop --alsologtostderr -v 5: (4m18.646819977s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-643000 status --alsologtostderr -v 5: exit status 7 (68.119846ms)

-- stdout --
	ha-643000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-643000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-643000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1208 04:16:40.219828  146695 out.go:360] Setting OutFile to fd 1 ...
	I1208 04:16:40.219949  146695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 04:16:40.219955  146695 out.go:374] Setting ErrFile to fd 2...
	I1208 04:16:40.219961  146695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 04:16:40.220171  146695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
	I1208 04:16:40.220355  146695 out.go:368] Setting JSON to false
	I1208 04:16:40.220382  146695 mustload.go:66] Loading cluster: ha-643000
	I1208 04:16:40.220495  146695 notify.go:221] Checking for updates...
	I1208 04:16:40.220712  146695 config.go:182] Loaded profile config "ha-643000": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 04:16:40.220736  146695 status.go:174] checking status of ha-643000 ...
	I1208 04:16:40.222937  146695 status.go:371] ha-643000 host status = "Stopped" (err=<nil>)
	I1208 04:16:40.222953  146695 status.go:384] host is not running, skipping remaining checks
	I1208 04:16:40.222958  146695 status.go:176] ha-643000 status: &{Name:ha-643000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 04:16:40.222975  146695 status.go:174] checking status of ha-643000-m02 ...
	I1208 04:16:40.224328  146695 status.go:371] ha-643000-m02 host status = "Stopped" (err=<nil>)
	I1208 04:16:40.224342  146695 status.go:384] host is not running, skipping remaining checks
	I1208 04:16:40.224346  146695 status.go:176] ha-643000-m02 status: &{Name:ha-643000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 04:16:40.224357  146695 status.go:174] checking status of ha-643000-m04 ...
	I1208 04:16:40.225512  146695 status.go:371] ha-643000-m04 host status = "Stopped" (err=<nil>)
	I1208 04:16:40.225526  146695 status.go:384] host is not running, skipping remaining checks
	I1208 04:16:40.225530  146695 status.go:176] ha-643000-m04 status: &{Name:ha-643000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (258.72s)

TestMultiControlPlane/serial/RestartCluster (104.6s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1208 04:17:01.996091  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:18:07.334100  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-643000 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m43.936402601s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (104.60s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

TestMultiControlPlane/serial/AddSecondaryNode (72.96s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 node add --control-plane --alsologtostderr -v 5
E1208 04:19:30.400825  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:19:36.253237  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-643000 node add --control-plane --alsologtostderr -v 5: (1m12.315730731s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-643000 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.96s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.66s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.66s)

TestJSONOutput/start/Command (72.54s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-925251 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1208 04:20:05.074826  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-925251 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m12.542648809s)
--- PASS: TestJSONOutput/start/Command (72.54s)
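
With `--output=json`, every line minikube prints is a CloudEvents envelope (the TestErrorJSONOutput transcript further down shows the exact shape). To follow only the human-readable step messages during a start, a filter along these lines works, assuming `jq` is available on the host:

	out/minikube-linux-amd64 start -p json-output-925251 --output=json --user=testUser \
	    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'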

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.69s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-925251 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-925251 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.83s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-925251 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-925251 --output=json --user=testUser: (6.833270223s)
--- PASS: TestJSONOutput/stop/Command (6.83s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-124108 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-124108 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (74.578543ms)

-- stdout --
	{"specversion":"1.0","id":"117e26d6-cd32-4205-b5ce-ce8ea51524e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-124108] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"70de1d40-c9ef-4aef-a002-971f9ee0ea3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21409"}}
	{"specversion":"1.0","id":"8e552d50-03f1-4682-877b-3bfbccebff3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2df29d52-a2e3-4c25-907a-ac8a899c203f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig"}}
	{"specversion":"1.0","id":"218cbe7c-7a30-41fe-8161-4dc9c4f9f17b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube"}}
	{"specversion":"1.0","id":"cc261fb2-d556-41e1-a615-8c8704dc650e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"75a5fca0-b4e6-4a9a-b065-4033c0aee791","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"900b496b-bd60-4097-9730-ced4401e823b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-124108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-124108
--- PASS: TestErrorJSONOutput (0.23s)
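
The error event at the end of that transcript carries machine-readable fields (`name`, `exitcode`, `message`, `advice`) under `.data`. A sketch for surfacing them when re-running the failing start (the profile was deleted above, so this recreates it), again assuming `jq`:

	out/minikube-linux-amd64 start -p json-output-error-124108 --output=json --driver=fail \
	    | jq -r 'select(.type == "io.k8s.sigs.minikube.error")
	             | "\(.data.name) (exit \(.data.exitcode)): \(.data.message)"'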

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (74.22s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-684949 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-684949 --driver=kvm2  --container-runtime=crio: (35.525007919s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-687113 --driver=kvm2  --container-runtime=crio
E1208 04:22:01.999473  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-687113 --driver=kvm2  --container-runtime=crio: (36.066594518s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-684949
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-687113
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-687113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-687113
helpers_test.go:175: Cleaning up "first-684949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-684949
--- PASS: TestMinikubeProfile (74.22s)

TestMountStart/serial/StartWithMountFirst (19.66s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-638502 --memory=3072 --mount-string /tmp/TestMountStartserial2376064596/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-638502 --memory=3072 --mount-string /tmp/TestMountStartserial2376064596/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.655812975s)
--- PASS: TestMountStart/serial/StartWithMountFirst (19.66s)

TestMountStart/serial/VerifyMountFirst (0.32s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-638502 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-638502 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)
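
The verification above only asserts that the path lists and that findmnt sees a mount there. To also check the mount type, the JSON output can be filtered; the `filesystems[].fstype` layout is findmnt's standard JSON shape (not shown in this log, so an assumption here), and minikube's host mount is expected to report 9p:

	out/minikube-linux-amd64 -p mount-start-1-638502 ssh -- findmnt --json /minikube-host \
	    | jq -r '.filesystems[0].fstype'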

TestMountStart/serial/StartWithMountSecond (20.18s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-652711 --memory=3072 --mount-string /tmp/TestMountStartserial2376064596/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-652711 --memory=3072 --mount-string /tmp/TestMountStartserial2376064596/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (19.181272802s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.18s)

TestMountStart/serial/VerifyMountSecond (0.31s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-652711 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-652711 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

TestMountStart/serial/DeleteFirst (0.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-638502 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.66s)

TestMountStart/serial/VerifyMountPostDelete (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-652711 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-652711 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-652711
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-652711: (1.211794949s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (18.33s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-652711
E1208 04:23:07.333762  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-652711: (17.326854875s)
--- PASS: TestMountStart/serial/RestartStopped (18.33s)

TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-652711 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-652711 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (94.61s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-012284 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1208 04:24:36.252820  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-012284 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m34.287865928s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (94.61s)

TestMultiNode/serial/DeployApp2Nodes (5.83s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-012284 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-012284 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-012284 -- rollout status deployment/busybox: (4.245486653s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-012284 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-012284 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-012284 -- exec busybox-7b57f96db7-7wgc5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-012284 -- exec busybox-7b57f96db7-vn7wl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-012284 -- exec busybox-7b57f96db7-7wgc5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-012284 -- exec busybox-7b57f96db7-vn7wl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-012284 -- exec busybox-7b57f96db7-7wgc5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-012284 -- exec busybox-7b57f96db7-vn7wl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.83s)
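
One thing the log implies but never prints is that the two busybox replicas actually landed on different nodes. A jsonpath query makes that visible; the `app=busybox` label is an assumption about testdata/multinodes/multinode-pod-dns-test.yaml, which is not included in this report:

	kubectl --context multinode-012284 get pods -l app=busybox \
	    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.nodeName}{"\n"}{end}'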

TestMultiNode/serial/PingHostFrom2Pods (0.84s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-012284 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-012284 -- exec busybox-7b57f96db7-7wgc5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-012284 -- exec busybox-7b57f96db7-7wgc5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-012284 -- exec busybox-7b57f96db7-vn7wl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-012284 -- exec busybox-7b57f96db7-vn7wl -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)
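
The two-step pattern above (resolve host.minikube.internal inside the pod, then ping the returned gateway address, 192.168.39.1 here) condenses to one pipeline per pod; note the `awk 'NR==5'` line-pick is tied to busybox nslookup's output layout:

	HOST_IP=$(kubectl --context multinode-012284 exec busybox-7b57f96db7-7wgc5 -- \
	    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	kubectl --context multinode-012284 exec busybox-7b57f96db7-7wgc5 -- ping -c 1 "$HOST_IP"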

TestMultiNode/serial/AddNode (42.4s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-012284 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-012284 -v=5 --alsologtostderr: (41.968115546s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.40s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-012284 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.45s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.45s)

TestMultiNode/serial/CopyFile (6.03s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 cp testdata/cp-test.txt multinode-012284:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 ssh -n multinode-012284 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 cp multinode-012284:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3538484596/001/cp-test_multinode-012284.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 ssh -n multinode-012284 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 cp multinode-012284:/home/docker/cp-test.txt multinode-012284-m02:/home/docker/cp-test_multinode-012284_multinode-012284-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 ssh -n multinode-012284 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 ssh -n multinode-012284-m02 "sudo cat /home/docker/cp-test_multinode-012284_multinode-012284-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 cp multinode-012284:/home/docker/cp-test.txt multinode-012284-m03:/home/docker/cp-test_multinode-012284_multinode-012284-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 ssh -n multinode-012284 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 ssh -n multinode-012284-m03 "sudo cat /home/docker/cp-test_multinode-012284_multinode-012284-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 cp testdata/cp-test.txt multinode-012284-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 ssh -n multinode-012284-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 cp multinode-012284-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3538484596/001/cp-test_multinode-012284-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 ssh -n multinode-012284-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 cp multinode-012284-m02:/home/docker/cp-test.txt multinode-012284:/home/docker/cp-test_multinode-012284-m02_multinode-012284.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 ssh -n multinode-012284-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 ssh -n multinode-012284 "sudo cat /home/docker/cp-test_multinode-012284-m02_multinode-012284.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 cp multinode-012284-m02:/home/docker/cp-test.txt multinode-012284-m03:/home/docker/cp-test_multinode-012284-m02_multinode-012284-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 ssh -n multinode-012284-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 ssh -n multinode-012284-m03 "sudo cat /home/docker/cp-test_multinode-012284-m02_multinode-012284-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 cp testdata/cp-test.txt multinode-012284-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 ssh -n multinode-012284-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 cp multinode-012284-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3538484596/001/cp-test_multinode-012284-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 ssh -n multinode-012284-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 cp multinode-012284-m03:/home/docker/cp-test.txt multinode-012284:/home/docker/cp-test_multinode-012284-m03_multinode-012284.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 ssh -n multinode-012284-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 ssh -n multinode-012284 "sudo cat /home/docker/cp-test_multinode-012284-m03_multinode-012284.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 cp multinode-012284-m03:/home/docker/cp-test.txt multinode-012284-m02:/home/docker/cp-test_multinode-012284-m03_multinode-012284-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 ssh -n multinode-012284-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 ssh -n multinode-012284-m02 "sudo cat /home/docker/cp-test_multinode-012284-m03_multinode-012284-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.03s)
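
The matrix above covers every direction `minikube cp` supports: host-to-node, node-to-host, and node-to-node. One concrete round trip from the same run, as a standalone sketch:

	# host -> node, then read it back over ssh
	out/minikube-linux-amd64 -p multinode-012284 cp testdata/cp-test.txt multinode-012284-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-012284 ssh -n multinode-012284-m02 "sudo cat /home/docker/cp-test.txt"
	# node -> node, without staging on the host
	out/minikube-linux-amd64 -p multinode-012284 cp multinode-012284-m02:/home/docker/cp-test.txt multinode-012284:/home/docker/cp-test_m02.txt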

TestMultiNode/serial/StopNode (2.18s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-012284 node stop m03: (1.542513339s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-012284 status: exit status 7 (316.325392ms)

                                                
                                                
-- stdout --
	multinode-012284
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-012284-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-012284-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-012284 status --alsologtostderr: exit status 7 (324.327676ms)

                                                
                                                
-- stdout --
	multinode-012284
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-012284-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-012284-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 04:25:52.119825  152630 out.go:360] Setting OutFile to fd 1 ...
	I1208 04:25:52.120126  152630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 04:25:52.120136  152630 out.go:374] Setting ErrFile to fd 2...
	I1208 04:25:52.120142  152630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 04:25:52.120331  152630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
	I1208 04:25:52.120502  152630 out.go:368] Setting JSON to false
	I1208 04:25:52.120528  152630 mustload.go:66] Loading cluster: multinode-012284
	I1208 04:25:52.120617  152630 notify.go:221] Checking for updates...
	I1208 04:25:52.120888  152630 config.go:182] Loaded profile config "multinode-012284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 04:25:52.120925  152630 status.go:174] checking status of multinode-012284 ...
	I1208 04:25:52.122972  152630 status.go:371] multinode-012284 host status = "Running" (err=<nil>)
	I1208 04:25:52.122991  152630 host.go:66] Checking if "multinode-012284" exists ...
	I1208 04:25:52.125481  152630 main.go:143] libmachine: domain multinode-012284 has defined MAC address 52:54:00:05:f4:99 in network mk-multinode-012284
	I1208 04:25:52.125932  152630 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:f4:99", ip: ""} in network mk-multinode-012284: {Iface:virbr1 ExpiryTime:2025-12-08 05:23:34 +0000 UTC Type:0 Mac:52:54:00:05:f4:99 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:multinode-012284 Clientid:01:52:54:00:05:f4:99}
	I1208 04:25:52.125972  152630 main.go:143] libmachine: domain multinode-012284 has defined IP address 192.168.39.98 and MAC address 52:54:00:05:f4:99 in network mk-multinode-012284
	I1208 04:25:52.126104  152630 host.go:66] Checking if "multinode-012284" exists ...
	I1208 04:25:52.126322  152630 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 04:25:52.128358  152630 main.go:143] libmachine: domain multinode-012284 has defined MAC address 52:54:00:05:f4:99 in network mk-multinode-012284
	I1208 04:25:52.128708  152630 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:05:f4:99", ip: ""} in network mk-multinode-012284: {Iface:virbr1 ExpiryTime:2025-12-08 05:23:34 +0000 UTC Type:0 Mac:52:54:00:05:f4:99 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:multinode-012284 Clientid:01:52:54:00:05:f4:99}
	I1208 04:25:52.128737  152630 main.go:143] libmachine: domain multinode-012284 has defined IP address 192.168.39.98 and MAC address 52:54:00:05:f4:99 in network mk-multinode-012284
	I1208 04:25:52.128884  152630 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/multinode-012284/id_rsa Username:docker}
	I1208 04:25:52.211734  152630 ssh_runner.go:195] Run: systemctl --version
	I1208 04:25:52.219030  152630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 04:25:52.235572  152630 kubeconfig.go:125] found "multinode-012284" server: "https://192.168.39.98:8443"
	I1208 04:25:52.235613  152630 api_server.go:166] Checking apiserver status ...
	I1208 04:25:52.235651  152630 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1208 04:25:52.254459  152630 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1372/cgroup
	W1208 04:25:52.265980  152630 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1372/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1208 04:25:52.266056  152630 ssh_runner.go:195] Run: ls
	I1208 04:25:52.270926  152630 api_server.go:253] Checking apiserver healthz at https://192.168.39.98:8443/healthz ...
	I1208 04:25:52.275648  152630 api_server.go:279] https://192.168.39.98:8443/healthz returned 200:
	ok
	I1208 04:25:52.275676  152630 status.go:463] multinode-012284 apiserver status = Running (err=<nil>)
	I1208 04:25:52.275689  152630 status.go:176] multinode-012284 status: &{Name:multinode-012284 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 04:25:52.275732  152630 status.go:174] checking status of multinode-012284-m02 ...
	I1208 04:25:52.277292  152630 status.go:371] multinode-012284-m02 host status = "Running" (err=<nil>)
	I1208 04:25:52.277310  152630 host.go:66] Checking if "multinode-012284-m02" exists ...
	I1208 04:25:52.279853  152630 main.go:143] libmachine: domain multinode-012284-m02 has defined MAC address 52:54:00:0b:67:2f in network mk-multinode-012284
	I1208 04:25:52.280226  152630 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:67:2f", ip: ""} in network mk-multinode-012284: {Iface:virbr1 ExpiryTime:2025-12-08 05:24:25 +0000 UTC Type:0 Mac:52:54:00:0b:67:2f Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-012284-m02 Clientid:01:52:54:00:0b:67:2f}
	I1208 04:25:52.280267  152630 main.go:143] libmachine: domain multinode-012284-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:0b:67:2f in network mk-multinode-012284
	I1208 04:25:52.280423  152630 host.go:66] Checking if "multinode-012284-m02" exists ...
	I1208 04:25:52.280634  152630 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1208 04:25:52.282653  152630 main.go:143] libmachine: domain multinode-012284-m02 has defined MAC address 52:54:00:0b:67:2f in network mk-multinode-012284
	I1208 04:25:52.283079  152630 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:67:2f", ip: ""} in network mk-multinode-012284: {Iface:virbr1 ExpiryTime:2025-12-08 05:24:25 +0000 UTC Type:0 Mac:52:54:00:0b:67:2f Iaid: IPaddr:192.168.39.67 Prefix:24 Hostname:multinode-012284-m02 Clientid:01:52:54:00:0b:67:2f}
	I1208 04:25:52.283103  152630 main.go:143] libmachine: domain multinode-012284-m02 has defined IP address 192.168.39.67 and MAC address 52:54:00:0b:67:2f in network mk-multinode-012284
	I1208 04:25:52.283219  152630 sshutil.go:53] new ssh client: &{IP:192.168.39.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21409-125868/.minikube/machines/multinode-012284-m02/id_rsa Username:docker}
	I1208 04:25:52.363410  152630 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1208 04:25:52.380383  152630 status.go:176] multinode-012284-m02 status: &{Name:multinode-012284-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1208 04:25:52.380421  152630 status.go:174] checking status of multinode-012284-m03 ...
	I1208 04:25:52.381936  152630 status.go:371] multinode-012284-m03 host status = "Stopped" (err=<nil>)
	I1208 04:25:52.381955  152630 status.go:384] host is not running, skipping remaining checks
	I1208 04:25:52.381960  152630 status.go:176] multinode-012284-m03 status: &{Name:multinode-012284-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.18s)
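Note: the exit status 7 above is the expected outcome, not a failure -- minikube status returns a non-zero code whenever any node's host is stopped, so scripts can branch on it. A sketch against the same profile:

    minikube -p multinode-012284 node stop m03
    minikube -p multinode-012284 status || echo "status exited $? (7 here: m03 host and kubelet are Stopped)"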

                                                
                                    
TestMultiNode/serial/StartAfterStop (37.66s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-012284 node start m03 -v=5 --alsologtostderr: (37.178361335s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (37.66s)
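Note: the counterpart to StopNode -- restarting the stopped worker and confirming all nodes report Running again. Condensed from the commands above:

    minikube -p multinode-012284 node start m03 -v=5 --alsologtostderr
    minikube -p multinode-012284 status -v=5 --alsologtostderr    # exit 0 once every host/kubelet is Running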

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (326.19s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-012284
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-012284
E1208 04:27:01.999678  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:27:39.321038  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:28:07.334355  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-012284: (2m56.869320663s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-012284 --wait=true -v=5 --alsologtostderr
E1208 04:29:36.256434  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-012284 --wait=true -v=5 --alsologtostderr: (2m29.191929823s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-012284
--- PASS: TestMultiNode/serial/RestartKeepsNodes (326.19s)
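Note: the assertion here is that a full stop/start cycle preserves the node inventory. A condensed sketch of the sequence driven above:

    minikube node list -p multinode-012284                          # record the three nodes
    minikube stop -p multinode-012284
    minikube start -p multinode-012284 --wait=true -v=5 --alsologtostderr
    minikube node list -p multinode-012284                          # expect the same list back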

                                                
                                    
TestMultiNode/serial/DeleteNode (2.49s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-012284 node delete m03: (2.058403668s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.49s)
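Note: the final assertion parses node readiness with a kubectl go-template rather than JSON. Stripped of the test-harness quoting, the template prints one Ready condition status per node, so after deleting m03 it should emit exactly two "True" lines:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'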

                                                
                                    
TestMultiNode/serial/StopMultiNode (167.46s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 stop
E1208 04:32:01.995375  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:33:07.334254  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:34:36.256454  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-012284 stop: (2m47.325560711s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-012284 status: exit status 7 (65.629314ms)

                                                
                                                
-- stdout --
	multinode-012284
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-012284-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-012284 status --alsologtostderr: exit status 7 (64.521242ms)

                                                
                                                
-- stdout --
	multinode-012284
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-012284-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 04:34:46.180285  155108 out.go:360] Setting OutFile to fd 1 ...
	I1208 04:34:46.180520  155108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 04:34:46.180528  155108 out.go:374] Setting ErrFile to fd 2...
	I1208 04:34:46.180532  155108 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 04:34:46.180721  155108 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
	I1208 04:34:46.180881  155108 out.go:368] Setting JSON to false
	I1208 04:34:46.180916  155108 mustload.go:66] Loading cluster: multinode-012284
	I1208 04:34:46.181036  155108 notify.go:221] Checking for updates...
	I1208 04:34:46.181280  155108 config.go:182] Loaded profile config "multinode-012284": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 04:34:46.181295  155108 status.go:174] checking status of multinode-012284 ...
	I1208 04:34:46.183311  155108 status.go:371] multinode-012284 host status = "Stopped" (err=<nil>)
	I1208 04:34:46.183327  155108 status.go:384] host is not running, skipping remaining checks
	I1208 04:34:46.183332  155108 status.go:176] multinode-012284 status: &{Name:multinode-012284 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1208 04:34:46.183349  155108 status.go:174] checking status of multinode-012284-m02 ...
	I1208 04:34:46.184434  155108 status.go:371] multinode-012284-m02 host status = "Stopped" (err=<nil>)
	I1208 04:34:46.184447  155108 status.go:384] host is not running, skipping remaining checks
	I1208 04:34:46.184451  155108 status.go:176] multinode-012284-m02 status: &{Name:multinode-012284-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (167.46s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (113.3s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-012284 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1208 04:36:10.403127  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-012284 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.855265415s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-012284 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (113.30s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (38.27s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-012284
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-012284-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-012284-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (74.246472ms)

                                                
                                                
-- stdout --
	* [multinode-012284-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-012284-m02' is duplicated with machine name 'multinode-012284-m02' in profile 'multinode-012284'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-012284-m03 --driver=kvm2  --container-runtime=crio
E1208 04:36:45.078621  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:37:01.997914  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-012284-m03 --driver=kvm2  --container-runtime=crio: (37.109635026s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-012284
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-012284: exit status 80 (205.393028ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-012284 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-012284-m03 already exists in multinode-012284-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-012284-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.27s)
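Note: both failures above are deliberate guard rails -- exit 14 (MK_USAGE) when a new profile name collides with a machine name inside an existing profile, and exit 80 (GUEST_NODE_ADD) when node add would mint a node name already claimed elsewhere. A sketch of the first check, assuming multinode-012284 is still running:

    minikube start -p multinode-012284-m02 --driver=kvm2 --container-runtime=crio
    echo $?    # 14: profile name duplicates machine multinode-012284-m02 in profile multinode-012284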

                                                
                                    
TestScheduledStopUnix (107.6s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-150928 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-150928 --memory=3072 --driver=kvm2  --container-runtime=crio: (35.981075652s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-150928 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1208 04:40:26.061354  157520 out.go:360] Setting OutFile to fd 1 ...
	I1208 04:40:26.061462  157520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 04:40:26.061470  157520 out.go:374] Setting ErrFile to fd 2...
	I1208 04:40:26.061474  157520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 04:40:26.061669  157520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
	I1208 04:40:26.061917  157520 out.go:368] Setting JSON to false
	I1208 04:40:26.061999  157520 mustload.go:66] Loading cluster: scheduled-stop-150928
	I1208 04:40:26.062287  157520 config.go:182] Loaded profile config "scheduled-stop-150928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 04:40:26.062350  157520 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/scheduled-stop-150928/config.json ...
	I1208 04:40:26.062534  157520 mustload.go:66] Loading cluster: scheduled-stop-150928
	I1208 04:40:26.062631  157520 config.go:182] Loaded profile config "scheduled-stop-150928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-150928 -n scheduled-stop-150928
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-150928 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1208 04:40:26.343755  157566 out.go:360] Setting OutFile to fd 1 ...
	I1208 04:40:26.344031  157566 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 04:40:26.344039  157566 out.go:374] Setting ErrFile to fd 2...
	I1208 04:40:26.344043  157566 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 04:40:26.344252  157566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
	I1208 04:40:26.344506  157566 out.go:368] Setting JSON to false
	I1208 04:40:26.344714  157566 daemonize_unix.go:73] killing process 157555 as it is an old scheduled stop
	I1208 04:40:26.344821  157566 mustload.go:66] Loading cluster: scheduled-stop-150928
	I1208 04:40:26.345334  157566 config.go:182] Loaded profile config "scheduled-stop-150928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 04:40:26.345422  157566 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/scheduled-stop-150928/config.json ...
	I1208 04:40:26.345650  157566 mustload.go:66] Loading cluster: scheduled-stop-150928
	I1208 04:40:26.345777  157566 config.go:182] Loaded profile config "scheduled-stop-150928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1208 04:40:26.351127  129900 retry.go:31] will retry after 140.84µs: open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/scheduled-stop-150928/pid: no such file or directory
I1208 04:40:26.352295  129900 retry.go:31] will retry after 99.23µs: open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/scheduled-stop-150928/pid: no such file or directory
I1208 04:40:26.353447  129900 retry.go:31] will retry after 161.086µs: open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/scheduled-stop-150928/pid: no such file or directory
I1208 04:40:26.354583  129900 retry.go:31] will retry after 437.161µs: open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/scheduled-stop-150928/pid: no such file or directory
I1208 04:40:26.355700  129900 retry.go:31] will retry after 350.54µs: open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/scheduled-stop-150928/pid: no such file or directory
I1208 04:40:26.356832  129900 retry.go:31] will retry after 1.050171ms: open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/scheduled-stop-150928/pid: no such file or directory
I1208 04:40:26.357956  129900 retry.go:31] will retry after 1.626834ms: open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/scheduled-stop-150928/pid: no such file or directory
I1208 04:40:26.360153  129900 retry.go:31] will retry after 944.678µs: open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/scheduled-stop-150928/pid: no such file or directory
I1208 04:40:26.361287  129900 retry.go:31] will retry after 1.471681ms: open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/scheduled-stop-150928/pid: no such file or directory
I1208 04:40:26.363485  129900 retry.go:31] will retry after 2.520576ms: open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/scheduled-stop-150928/pid: no such file or directory
I1208 04:40:26.366686  129900 retry.go:31] will retry after 4.82888ms: open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/scheduled-stop-150928/pid: no such file or directory
I1208 04:40:26.371890  129900 retry.go:31] will retry after 4.721814ms: open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/scheduled-stop-150928/pid: no such file or directory
I1208 04:40:26.377106  129900 retry.go:31] will retry after 9.811483ms: open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/scheduled-stop-150928/pid: no such file or directory
I1208 04:40:26.387326  129900 retry.go:31] will retry after 22.107ms: open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/scheduled-stop-150928/pid: no such file or directory
I1208 04:40:26.409564  129900 retry.go:31] will retry after 36.884763ms: open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/scheduled-stop-150928/pid: no such file or directory
I1208 04:40:26.446885  129900 retry.go:31] will retry after 35.566182ms: open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/scheduled-stop-150928/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-150928 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-150928 -n scheduled-stop-150928
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-150928
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-150928 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1208 04:40:52.045527  157715 out.go:360] Setting OutFile to fd 1 ...
	I1208 04:40:52.045778  157715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 04:40:52.045786  157715 out.go:374] Setting ErrFile to fd 2...
	I1208 04:40:52.045790  157715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 04:40:52.045970  157715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
	I1208 04:40:52.046198  157715 out.go:368] Setting JSON to false
	I1208 04:40:52.046282  157715 mustload.go:66] Loading cluster: scheduled-stop-150928
	I1208 04:40:52.046603  157715 config.go:182] Loaded profile config "scheduled-stop-150928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1208 04:40:52.046688  157715 profile.go:143] Saving config to /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/scheduled-stop-150928/config.json ...
	I1208 04:40:52.046876  157715 mustload.go:66] Loading cluster: scheduled-stop-150928
	I1208 04:40:52.046996  157715 config.go:182] Loaded profile config "scheduled-stop-150928": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-150928
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-150928: exit status 7 (64.101372ms)

                                                
                                                
-- stdout --
	scheduled-stop-150928
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-150928 -n scheduled-stop-150928
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-150928 -n scheduled-stop-150928: exit status 7 (61.99987ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-150928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-150928
--- PASS: TestScheduledStopUnix (107.60s)
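Note: the scheduled-stop flow under test -- each --schedule invocation daemonizes a timer process (replacing any older one, per the "killing process ... as it is an old scheduled stop" line), --cancel-scheduled disarms it, and once a timer fires, status returns exit 7. Condensed from the commands above:

    minikube stop -p scheduled-stop-150928 --schedule 5m         # arm a 5-minute stop
    minikube stop -p scheduled-stop-150928 --cancel-scheduled    # disarm it
    minikube stop -p scheduled-stop-150928 --schedule 15s        # arm a short one and let it fire
    sleep 20; minikube status -p scheduled-stop-150928           # exit 7 after the stop lands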

                                                
                                    
TestRunningBinaryUpgrade (394.36s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.1710366061 start -p running-upgrade-183390 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E1208 04:42:01.995742  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.1710366061 start -p running-upgrade-183390 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m32.5633952s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-183390 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-183390 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (4m56.987756155s)
helpers_test.go:175: Cleaning up "running-upgrade-183390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-183390
--- PASS: TestRunningBinaryUpgrade (394.36s)
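Note: the upgrade path here is to boot a cluster with a pinned old release binary (the v1.35.0 build staged under /tmp), then re-run start with the freshly built binary while the cluster is still running, so it adopts and upgrades the live cluster in place. Condensed:

    /tmp/minikube-v1.35.0.1710366061 start -p running-upgrade-183390 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p running-upgrade-183390 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio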

                                                
                                    
TestKubernetesUpgrade (171.59s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-186554 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-186554 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.19909054s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-186554
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-186554: (2.090855046s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-186554 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-186554 status --format={{.Host}}: exit status 7 (81.903946ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-186554 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-186554 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.21829766s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-186554 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-186554 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-186554 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (90.268588ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-186554] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-186554
	    minikube start -p kubernetes-upgrade-186554 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1865542 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-186554 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-186554 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-186554 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (59.849840084s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-186554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-186554
--- PASS: TestKubernetesUpgrade (171.59s)
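Note: the sequence covers an in-place upgrade, a refused downgrade (exit 106, K8S_DOWNGRADE_UNSUPPORTED, with the three recovery options printed above), and a restart at the upgraded version. Condensed:

    minikube start -p kubernetes-upgrade-186554 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p kubernetes-upgrade-186554
    minikube start -p kubernetes-upgrade-186554 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --driver=kvm2 --container-runtime=crio
    minikube start -p kubernetes-upgrade-186554 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio    # exit 106: downgrade refused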

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-161907 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-161907 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (93.581949ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-161907] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
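Note: --no-kubernetes and --kubernetes-version are mutually exclusive, and the error also points at the global config as a possible source of a pinned version:

    minikube start -p NoKubernetes-161907 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio    # exit 14
    minikube config unset kubernetes-version    # clears a globally configured version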

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (76.85s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-161907 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-161907 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m16.597014476s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-161907 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (76.85s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (29.7s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-161907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-161907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (28.619580707s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-161907 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-161907 status -o json: exit status 2 (214.914313ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-161907","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-161907
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (29.70s)
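Note: re-running start with --no-kubernetes on an existing profile keeps the VM but stops the Kubernetes components, which is why status exits 2 (host Running, kubelet/apiserver Stopped) rather than 7 (host stopped). Sketch:

    minikube start -p NoKubernetes-161907 --no-kubernetes --memory=3072 --driver=kvm2 --container-runtime=crio
    minikube -p NoKubernetes-161907 status -o json    # {"Host":"Running","Kubelet":"Stopped",...} and exit status 2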

                                                
                                    
TestNetworkPlugins/group/false (5.2s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-127227 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-127227 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (122.347696ms)

                                                
                                                
-- stdout --
	* [false-127227] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21409
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1208 04:43:00.692337  159817 out.go:360] Setting OutFile to fd 1 ...
	I1208 04:43:00.692622  159817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 04:43:00.692633  159817 out.go:374] Setting ErrFile to fd 2...
	I1208 04:43:00.692637  159817 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1208 04:43:00.692820  159817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21409-125868/.minikube/bin
	I1208 04:43:00.693302  159817 out.go:368] Setting JSON to false
	I1208 04:43:00.694293  159817 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5125,"bootTime":1765163856,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1208 04:43:00.694351  159817 start.go:143] virtualization: kvm guest
	I1208 04:43:00.695872  159817 out.go:179] * [false-127227] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1208 04:43:00.697337  159817 out.go:179]   - MINIKUBE_LOCATION=21409
	I1208 04:43:00.697325  159817 notify.go:221] Checking for updates...
	I1208 04:43:00.698713  159817 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1208 04:43:00.699912  159817 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21409-125868/kubeconfig
	I1208 04:43:00.700972  159817 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21409-125868/.minikube
	I1208 04:43:00.702066  159817 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1208 04:43:00.703061  159817 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1208 04:43:00.704402  159817 config.go:182] Loaded profile config "NoKubernetes-161907": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1208 04:43:00.704502  159817 config.go:182] Loaded profile config "kubernetes-upgrade-186554": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1208 04:43:00.704569  159817 config.go:182] Loaded profile config "running-upgrade-183390": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1208 04:43:00.704675  159817 driver.go:422] Setting default libvirt URI to qemu:///system
	I1208 04:43:00.740715  159817 out.go:179] * Using the kvm2 driver based on user configuration
	I1208 04:43:00.741695  159817 start.go:309] selected driver: kvm2
	I1208 04:43:00.741713  159817 start.go:927] validating driver "kvm2" against <nil>
	I1208 04:43:00.741728  159817 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1208 04:43:00.743797  159817 out.go:203] 
	W1208 04:43:00.744734  159817 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1208 04:43:00.745644  159817 out.go:203] 

                                                
                                                
** /stderr **
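Note: the immediate exit 14 is the planned outcome -- cri-o ships no built-in pod networking, so minikube rejects --cni=false up front rather than building a cluster with no pod network:

    minikube start -p false-127227 --cni=false --driver=kvm2 --container-runtime=crio    # exit 14: the "crio" container runtime requires CNI

The debugLogs dump that follows is collected for every network-plugin group; since no cluster was ever created, every probe reports a missing profile or context, and the group still passes.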
net_test.go:88: 
----------------------- debugLogs start: false-127227 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-127227

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-127227

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-127227

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-127227

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-127227

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-127227

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-127227

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-127227

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-127227

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-127227

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-127227

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> k8s: describe netcat deployment:
error: context "false-127227" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-127227" does not exist

>>> k8s: netcat logs:
error: context "false-127227" does not exist

>>> k8s: describe coredns deployment:
error: context "false-127227" does not exist

>>> k8s: describe coredns pods:
error: context "false-127227" does not exist

>>> k8s: coredns logs:
error: context "false-127227" does not exist

>>> k8s: describe api server pod(s):
error: context "false-127227" does not exist

>>> k8s: api server logs:
error: context "false-127227" does not exist

>>> host: /etc/cni:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: ip a s:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: ip r s:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: iptables-save:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: iptables table nat:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> k8s: describe kube-proxy daemon set:
error: context "false-127227" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-127227" does not exist

>>> k8s: kube-proxy logs:
error: context "false-127227" does not exist

>>> host: kubelet daemon status:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: kubelet daemon config:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> k8s: kubelet logs:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21409-125868/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Dec 2025 04:42:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.159:8443
  name: NoKubernetes-161907
contexts:
- context:
    cluster: NoKubernetes-161907
    extensions:
    - extension:
        last-update: Mon, 08 Dec 2025 04:42:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-161907
  name: NoKubernetes-161907
current-context: NoKubernetes-161907
kind: Config
users:
- name: NoKubernetes-161907
  user:
    client-certificate: /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/NoKubernetes-161907/client.crt
    client-key: /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/NoKubernetes-161907/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-127227

>>> host: docker daemon status:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: docker daemon config:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: /etc/docker/daemon.json:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: docker system info:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: cri-docker daemon status:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: cri-docker daemon config:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: cri-dockerd version:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: containerd daemon status:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: containerd daemon config:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: /etc/containerd/config.toml:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: containerd config dump:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: crio daemon status:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: crio daemon config:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: /etc/crio:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

>>> host: crio config:
* Profile "false-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127227"

----------------------- debugLogs end: false-127227 [took: 4.895810359s] --------------------------------
helpers_test.go:175: Cleaning up "false-127227" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-127227
--- PASS: TestNetworkPlugins/group/false (5.20s)
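
One detail worth pulling out of the debug dump above: the ">>> k8s: kubectl config:" section prints whatever kubeconfig minikube last wrote, which at this point in the run still pointed at the NoKubernetes-161907 profile rather than the already-deleted false-127227 one. To reproduce just that slice by hand, a minimal sketch (kubectl's --minify trims the file down to the one named context):

	kubectl config view --minify --context=NoKubernetes-161907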

x
+
TestISOImage/Setup (29.42s)

=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-927656 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-927656 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.420521956s)
--- PASS: TestISOImage/Setup (29.42s)

x
+
TestNoKubernetes/serial/Start (37.99s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-161907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-161907 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (37.99174598s)
--- PASS: TestNoKubernetes/serial/Start (37.99s)

x
+
TestISOImage/Binaries/crictl (0.19s)

=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-927656 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.19s)

x
+
TestISOImage/Binaries/curl (0.2s)

=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-927656 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.20s)

x
+
TestISOImage/Binaries/docker (0.19s)

=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-927656 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.19s)

x
+
TestISOImage/Binaries/git (0.19s)

=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-927656 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.19s)

x
+
TestISOImage/Binaries/iptables (0.18s)

=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-927656 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.18s)

x
+
TestISOImage/Binaries/podman (0.2s)

=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-927656 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.20s)

x
+
TestISOImage/Binaries/rsync (0.18s)

=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-927656 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.18s)

x
+
TestISOImage/Binaries/socat (0.19s)

=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-927656 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.19s)

x
+
TestISOImage/Binaries/wget (0.21s)

=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-927656 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.21s)

x
+
TestISOImage/Binaries/VBoxControl (0.2s)

=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-927656 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.20s)

x
+
TestISOImage/Binaries/VBoxService (0.19s)

=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-927656 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.19s)
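
All eleven binary checks above are the same one-liner with a different target, so for anyone spot-checking an ISO build by hand they collapse into a loop. A minimal sketch, assuming the guest-927656 profile from TestISOImage/Setup is still running (minikube ssh propagates the remote command's exit status):

	for bin in crictl curl docker git iptables podman rsync socat wget VBoxControl VBoxService; do
	  out/minikube-linux-amd64 -p guest-927656 ssh "which $bin" >/dev/null || echo "missing: $bin"
	done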

x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21409-125868/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
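
With --no-kubernetes, no kubeadm/kubelet artifacts should be fetched at all, so this check is purely filesystem-side. A minimal sketch of the same idea, assuming an empty or absent cache directory is the pass condition (the test's exact assertion lives at no_kubernetes_test.go:89):

	CACHE=/home/jenkins/minikube-integration/21409-125868/.minikube/cache/linux/amd64/v0.0.0
	[ -z "$(ls -A "$CACHE" 2>/dev/null)" ] && echo "ok: no Kubernetes binaries were downloaded"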

x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-161907 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-161907 "sudo systemctl is-active --quiet service kubelet": exit status 1 (165.20241ms)

** stderr **
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)
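
The "status 4" in stderr comes from systemctl, relayed through ssh: systemctl is-active exits 0 only when the unit is active, and on recent systemd a 4 usually means the unit was not found at all, which is exactly what a --no-kubernetes node should report. The same check by hand:

	out/minikube-linux-amd64 ssh -p NoKubernetes-161907 "sudo systemctl is-active --quiet service kubelet" \
	  || echo "kubelet not running (expected)"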

x
+
TestNoKubernetes/serial/ProfileList (25.05s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (15.569656685s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E1208 04:44:19.322664  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (9.478011446s)
--- PASS: TestNoKubernetes/serial/ProfileList (25.05s)
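
The JSON form is the machine-readable variant that TestPause/serial/VerifyDeletedResources relies on later in this run. A minimal sketch of extracting just the profile names, assuming jq is available and minikube's current valid/invalid output schema:

	out/minikube-linux-amd64 profile list --output=json | jq -r '.valid[].Name'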

x
+
TestNoKubernetes/serial/Stop (1.33s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-161907
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-161907: (1.327389811s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

x
+
TestNoKubernetes/serial/StartNoArgs (18.32s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-161907 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-161907 --driver=kvm2  --container-runtime=crio: (18.324173139s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (18.32s)

x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-161907 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-161907 "sudo systemctl is-active --quiet service kubelet": exit status 1 (180.767386ms)

** stderr **
	ssh: Process exited with status 4

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.18s)

x
+
TestStoppedBinaryUpgrade/Setup (3.76s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.76s)

x
+
TestStoppedBinaryUpgrade/Upgrade (98.51s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3358604342 start -p stopped-upgrade-675832 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3358604342 start -p stopped-upgrade-675832 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m0.52579952s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3358604342 -p stopped-upgrade-675832 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3358604342 -p stopped-upgrade-675832 stop: (1.772601218s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-675832 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1208 04:47:02.000011  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-675832 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.211490272s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (98.51s)
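
The sequence above is the whole upgrade contract: provision with the previous release binary, stop the cluster, then start the same profile with the binary under test and require it to come up cleanly. Condensed (the /tmp path is just the release binary this particular run downloaded):

	/tmp/minikube-v1.35.0.3358604342 start -p stopped-upgrade-675832 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
	/tmp/minikube-v1.35.0.3358604342 -p stopped-upgrade-675832 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-675832 --memory=3072 --driver=kvm2 --container-runtime=crio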

x
+
TestPause/serial/Start (86.58s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-093469 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-093469 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m26.579434895s)
--- PASS: TestPause/serial/Start (86.58s)

x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.22s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-675832
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-675832: (1.222838829s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.22s)

x
+
TestNetworkPlugins/group/auto/Start (57.3s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-127227 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-127227 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (57.299301263s)
--- PASS: TestNetworkPlugins/group/auto/Start (57.30s)

x
+
TestPause/serial/SecondStartNoReconfiguration (39.68s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-093469 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-093469 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.644618748s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.68s)

x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-127227 "pgrep -a kubelet"
I1208 04:48:05.861417  129900 config.go:182] Loaded profile config "auto-127227": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

x
+
TestNetworkPlugins/group/auto/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-127227 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vpl7l" [8dcaf3a6-ffe6-4f12-aea4-c12d1203c973] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1208 04:48:07.333966  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-vpl7l" [8dcaf3a6-ffe6-4f12-aea4-c12d1203c973] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004606374s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.27s)

x
+
TestPause/serial/Pause (0.72s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-093469 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

x
+
TestPause/serial/VerifyStatus (0.23s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-093469 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-093469 --output=json --layout=cluster: exit status 2 (232.093159ms)

-- stdout --
	{"Name":"pause-093469","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-093469","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.23s)
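
Note that a paused cluster makes minikube status exit non-zero (2 here), so scripts should branch on the JSON payload rather than on the exit code. A minimal sketch, assuming jq:

	out/minikube-linux-amd64 status -p pause-093469 --output=json --layout=cluster | jq -r '.StatusName'
	# prints "Paused" (StatusCode 418) for the state verified above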

x
+
TestNetworkPlugins/group/kindnet/Start (56.44s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-127227 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-127227 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (56.43966801s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (56.44s)

x
+
TestPause/serial/Unpause (0.69s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-093469 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

x
+
TestPause/serial/PauseAgain (0.75s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-093469 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.75s)

x
+
TestPause/serial/DeletePaused (0.93s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-093469 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.93s)

x
+
TestPause/serial/VerifyDeletedResources (0.55s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.55s)

x
+
TestNetworkPlugins/group/calico/Start (101.4s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-127227 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-127227 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m41.397701895s)
--- PASS: TestNetworkPlugins/group/calico/Start (101.40s)

x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-127227 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-127227 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-127227 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
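
DNS, Localhost, and HairPin are the same three probes repeated for every CNI profile in this group: cluster DNS resolution from inside a pod, a loopback dial, and a hairpin dial back to the pod through its own service. Condensed for the auto profile:

	kubectl --context auto-127227 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-127227 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-127227 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"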

x
+
TestNetworkPlugins/group/custom-flannel/Start (89.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-127227 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-127227 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m29.142452734s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (89.14s)

x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-hsdpr" [87f1d3e4-d22b-488e-9c19-60a6ee3ebf91] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00513467s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.48s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-127227 "pgrep -a kubelet"
I1208 04:49:14.901112  129900 config.go:182] Loaded profile config "kindnet-127227": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.48s)

x
+
TestNetworkPlugins/group/kindnet/NetCatPod (12.41s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-127227 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context kindnet-127227 replace --force -f testdata/netcat-deployment.yaml: (1.342056299s)
I1208 04:49:16.256762  129900 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1208 04:49:16.282055  129900 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5btcs" [1412c74e-9b16-4f5f-b51d-419b3286db81] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5btcs" [1412c74e-9b16-4f5f-b51d-419b3286db81] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005253158s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.41s)

x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-127227 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-127227 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-127227 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

x
+
TestNetworkPlugins/group/enable-default-cni/Start (89.89s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-127227 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-127227 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m29.89124811s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (89.89s)

x
+
TestNetworkPlugins/group/flannel/Start (85.61s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-127227 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-127227 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m25.606196891s)
--- PASS: TestNetworkPlugins/group/flannel/Start (85.61s)
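
Every profile in this group shares the same start line except for the CNI selector; everything downstream (netcat deployment, DNS, localhost, hairpin) is identical. With the shared flags factored into a variable (flag order differs slightly from the literal invocations above), the variants exercised in this run are:

	COMMON="--memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 --container-runtime=crio"
	out/minikube-linux-amd64 start -p auto-127227 $COMMON
	out/minikube-linux-amd64 start -p kindnet-127227 $COMMON --cni=kindnet
	out/minikube-linux-amd64 start -p calico-127227 $COMMON --cni=calico
	out/minikube-linux-amd64 start -p custom-flannel-127227 $COMMON --cni=testdata/kube-flannel.yaml
	out/minikube-linux-amd64 start -p flannel-127227 $COMMON --cni=flannel
	out/minikube-linux-amd64 start -p bridge-127227 $COMMON --cni=bridge
	out/minikube-linux-amd64 start -p enable-default-cni-127227 $COMMON --enable-default-cni=true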

x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-5n47g" [8513a7f9-ae51-4610-be7f-ac3a3d5a1bfb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.009164825s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-127227 "pgrep -a kubelet"
I1208 04:50:01.677680  129900 config.go:182] Loaded profile config "custom-flannel-127227": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-127227 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rgfrs" [936bc6b2-7431-4e2a-a10b-cdb3de274603] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rgfrs" [936bc6b2-7431-4e2a-a10b-cdb3de274603] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.005353001s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.29s)

x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-127227 "pgrep -a kubelet"
I1208 04:50:02.833047  129900 config.go:182] Loaded profile config "calico-127227": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

x
+
TestNetworkPlugins/group/calico/NetCatPod (13.32s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-127227 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4wvx5" [9c9c0c8a-4dd4-48b1-ac11-435e52cac2fe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4wvx5" [9c9c0c8a-4dd4-48b1-ac11-435e52cac2fe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.004377089s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.32s)

x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-127227 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-127227 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-127227 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

x
+
TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-127227 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-127227 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

x
+
TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-127227 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

x
+
TestNetworkPlugins/group/bridge/Start (86.66s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-127227 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-127227 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m26.663458624s)
--- PASS: TestNetworkPlugins/group/bridge/Start (86.66s)

x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (74.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-185797 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-185797 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m14.872539418s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (74.87s)

x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-127227 "pgrep -a kubelet"
I1208 04:51:07.994296  129900 config.go:182] Loaded profile config "enable-default-cni-127227": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.19s)

x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-127227 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-t7w6p" [90a60b35-b75d-484f-b5fe-2911ad35e1a2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-t7w6p" [90a60b35-b75d-484f-b5fe-2911ad35e1a2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.006031606s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.26s)

x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-fcbrc" [5587ad8b-8f2c-47f3-8110-772647482f33] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004156419s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-127227 "pgrep -a kubelet"
I1208 04:51:15.118384  129900 config.go:182] Loaded profile config "flannel-127227": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-127227 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-k5vx9" [fecd9f09-99d3-4dd1-8a93-1613fde16794] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-k5vx9" [fecd9f09-99d3-4dd1-8a93-1613fde16794] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004831628s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-127227 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-127227 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-127227 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-127227 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-127227 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-127227 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

TestStartStop/group/no-preload/serial/FirstStart (95s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-862077 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-862077 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m34.999132883s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (95.00s)

TestStartStop/group/embed-certs/serial/FirstStart (101.71s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-890156 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-890156 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m41.707249505s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (101.71s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-185797 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [14af3481-39a0-4736-94d8-a4dd0ef3078a] Pending
helpers_test.go:352: "busybox" [14af3481-39a0-4736-94d8-a4dd0ef3078a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [14af3481-39a0-4736-94d8-a4dd0ef3078a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.005584255s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-185797 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.37s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-127227 "pgrep -a kubelet"
I1208 04:51:59.494959  129900 config.go:182] Loaded profile config "bridge-127227": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-127227 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cz6rw" [613c1963-c87b-4036-8caf-e1d1128a8864] Pending
E1208 04:52:01.995384  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-cz6rw" [613c1963-c87b-4036-8caf-e1d1128a8864] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cz6rw" [613c1963-c87b-4036-8caf-e1d1128a8864] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.006469691s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-185797 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-185797 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.280168553s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-185797 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.37s)

TestStartStop/group/old-k8s-version/serial/Stop (70.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-185797 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-185797 --alsologtostderr -v=3: (1m10.116898828s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (70.12s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-127227 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-127227 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-127227 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-770356 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1208 04:52:50.405195  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:53:06.115408  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/auto-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:53:06.121794  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/auto-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:53:06.133187  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/auto-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:53:06.154630  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/auto-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:53:06.196104  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/auto-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:53:06.277998  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/auto-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:53:06.439784  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/auto-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:53:06.762108  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/auto-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:53:07.333429  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:53:07.403978  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/auto-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:53:08.685551  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/auto-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-770356 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m22.113096834s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.11s)

TestStartStop/group/no-preload/serial/DeployApp (12.32s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-862077 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [28f00bf6-5aac-4ad5-b00e-9bd41bb773c7] Pending
helpers_test.go:352: "busybox" [28f00bf6-5aac-4ad5-b00e-9bd41bb773c7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1208 04:53:11.246976  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/auto-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [28f00bf6-5aac-4ad5-b00e-9bd41bb773c7] Running
E1208 04:53:16.369023  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/auto-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 12.004336795s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-862077 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (12.32s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-185797 -n old-k8s-version-185797
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-185797 -n old-k8s-version-185797: exit status 7 (65.761341ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-185797 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/old-k8s-version/serial/SecondStart (42.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-185797 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-185797 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (41.841523124s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-185797 -n old-k8s-version-185797
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (42.07s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-862077 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-862077 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/no-preload/serial/Stop (76.97s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-862077 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-862077 --alsologtostderr -v=3: (1m16.966121391s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (76.97s)

TestStartStop/group/embed-certs/serial/DeployApp (12.3s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-890156 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [98ab961c-5cfd-430a-a17e-422369fde743] Pending
E1208 04:53:25.080565  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/addons-301052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [98ab961c-5cfd-430a-a17e-422369fde743] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1208 04:53:26.610814  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/auto-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [98ab961c-5cfd-430a-a17e-422369fde743] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.004191667s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-890156 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.30s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-890156 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-890156 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.005841846s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-890156 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.09s)

TestStartStop/group/embed-certs/serial/Stop (86.81s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-890156 --alsologtostderr -v=3
E1208 04:53:47.092222  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/auto-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-890156 --alsologtostderr -v=3: (1m26.807207226s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (86.81s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-770356 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [248d8b09-91d1-4a0e-b077-764d01b3d1d6] Pending
helpers_test.go:352: "busybox" [248d8b09-91d1-4a0e-b077-764d01b3d1d6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [248d8b09-91d1-4a0e-b077-764d01b3d1d6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003964336s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-770356 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.28s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2fcrg" [8e8efa07-50f3-4c70-96bb-ae885f3f23bc] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2fcrg" [8e8efa07-50f3-4c70-96bb-ae885f3f23bc] Running
E1208 04:54:08.420258  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/kindnet-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:08.426630  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/kindnet-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:08.437981  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/kindnet-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:08.459318  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/kindnet-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:08.500674  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/kindnet-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:08.582095  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/kindnet-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:08.743663  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/kindnet-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.004406446s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-770356 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-770356 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (75.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-770356 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-770356 --alsologtostderr -v=3: (1m15.586814855s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (75.59s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2fcrg" [8e8efa07-50f3-4c70-96bb-ae885f3f23bc] Running
E1208 04:54:09.065418  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/kindnet-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:09.707121  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/kindnet-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:10.988749  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/kindnet-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:13.550982  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/kindnet-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003189364s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-185797 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-185797 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/old-k8s-version/serial/Pause (2.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-185797 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-185797 -n old-k8s-version-185797
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-185797 -n old-k8s-version-185797: exit status 2 (208.391999ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-185797 -n old-k8s-version-185797
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-185797 -n old-k8s-version-185797: exit status 2 (214.310481ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-185797 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-185797 -n old-k8s-version-185797
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-185797 -n old-k8s-version-185797
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.46s)

TestStartStop/group/newest-cni/serial/FirstStart (40.37s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-815514 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1208 04:54:18.673269  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/kindnet-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:28.054135  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/auto-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:28.915111  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/kindnet-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:36.253541  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-194253/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-815514 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (40.366312972s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.37s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-862077 -n no-preload-862077
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-862077 -n no-preload-862077: exit status 7 (73.336679ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-862077 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (51.65s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-862077 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1208 04:54:49.397212  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/kindnet-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:56.603393  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/calico-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:56.610114  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/calico-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:56.621934  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/calico-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:56.643454  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/calico-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:56.685050  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/calico-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:56.766952  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/calico-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:56.928617  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/calico-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:57.250421  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/calico-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:54:57.891833  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/calico-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-862077 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (51.292499039s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-862077 -n no-preload-862077
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.65s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-815514 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/newest-cni/serial/Stop (7.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-815514 --alsologtostderr -v=3
E1208 04:54:59.173963  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/calico-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:55:01.736045  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/calico-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:55:01.946733  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/custom-flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:55:01.953173  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/custom-flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:55:01.964710  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/custom-flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:55:01.986120  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/custom-flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:55:02.027781  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/custom-flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:55:02.109341  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/custom-flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:55:02.271033  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/custom-flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:55:02.592833  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/custom-flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:55:03.234540  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/custom-flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:55:04.517061  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/custom-flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-815514 --alsologtostderr -v=3: (7.244161517s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.24s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-890156 -n embed-certs-890156
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-890156 -n embed-certs-890156: exit status 7 (76.233053ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-890156 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (44.92s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-890156 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-890156 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (44.606263493s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-890156 -n embed-certs-890156
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.92s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-815514 -n newest-cni-815514
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-815514 -n newest-cni-815514: exit status 7 (74.781077ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-815514 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (85.79s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-815514 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1208 04:55:06.858203  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/calico-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:55:07.078924  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/custom-flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:55:12.200297  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/custom-flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-815514 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m25.565875337s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-815514 -n newest-cni-815514
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (85.79s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-770356 -n default-k8s-diff-port-770356
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-770356 -n default-k8s-diff-port-770356: exit status 7 (93.329657ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-770356 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (66.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-770356 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1208 04:55:17.100196  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/calico-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:55:22.441665  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/custom-flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:55:30.359153  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/kindnet-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-770356 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m5.811063734s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-770356 -n default-k8s-diff-port-770356
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (66.03s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-7wfq6" [adfbf765-4524-4c6d-a507-06f8d19c2f42] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-7wfq6" [adfbf765-4524-4c6d-a507-06f8d19c2f42] Running
E1208 04:55:37.582306  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/calico-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.004751429s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.01s)
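
The harness polls the kubernetes-dashboard namespace until a pod matching the k8s-app=kubernetes-dashboard label reports Ready, with a 9-minute budget. A rough equivalent using kubectl's own readiness gate (an illustration, not the helpers_test.go implementation; context, selector, namespace, and timeout come from the log above):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// kubectl blocks until the condition holds or the timeout expires.
	cmd := exec.Command("kubectl", "--context", "no-preload-862077",
		"wait", "--namespace", "kubernetes-dashboard",
		"--selector", "k8s-app=kubernetes-dashboard",
		"--for=condition=Ready", "pod", "--timeout=9m0s")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("pods never became ready: %v\n%s", err, out)
	}
	log.Println("k8s-app=kubernetes-dashboard healthy")
}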

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-b84665fb8-7wfq6" [adfbf765-4524-4c6d-a507-06f8d19c2f42] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004304602s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-862077 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E1208 04:55:42.923563  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/custom-flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-862077 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
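
VerifyKubernetesImages lists the images loaded in the cluster and reports anything that did not come from the registry the core components use, which is why the busybox test image is called out above. A loose sketch of that scan (assumptions: the test above requests JSON, but this sketch assumes plain output with one image reference per line, and treats registry.k8s.io as the only "minikube" registry):

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "no-preload-862077", "image", "list").Output()
	if err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		img := strings.TrimSpace(sc.Text())
		// registry.k8s.io hosts the core components; flag everything else.
		if img != "" && !strings.HasPrefix(img, "registry.k8s.io/") {
			fmt.Println("Found non-minikube image:", img)
		}
	}
}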

TestStartStop/group/no-preload/serial/Pause (3.58s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-862077 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-862077 --alsologtostderr -v=1: (1.029080498s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-862077 -n no-preload-862077
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-862077 -n no-preload-862077: exit status 2 (248.868839ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-862077 -n no-preload-862077
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-862077 -n no-preload-862077: exit status 2 (239.663022ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-862077 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-862077 -n no-preload-862077
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-862077 -n no-preload-862077
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.58s)
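
The Pause subtest's command sequence reduces to: pause the cluster, confirm the API server reports Paused and the kubelet reports Stopped (both via the informative non-zero status exits shown above), then unpause and re-check. The same sequence as a standalone sketch (errors from pause/unpause are deliberately ignored here for brevity; a real caller would check them):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const profile = "no-preload-862077"

// status returns one field of `minikube status`; exit 2 still leaves the
// state on stdout, so the error is intentionally discarded.
func status(field string) string {
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).Run()
	fmt.Println("apiserver:", status("APIServer"), "kubelet:", status("Kubelet"))
	exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile).Run()
	fmt.Println("apiserver:", status("APIServer"), "kubelet:", status("Kubelet"))
}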

TestISOImage/PersistentMounts//data (0.2s)

=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-927656 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.20s)

TestISOImage/PersistentMounts//var/lib/docker (0.17s)

=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-927656 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.17s)

TestISOImage/PersistentMounts//var/lib/cni (0.2s)

=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-927656 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.20s)

TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)

=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-927656 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)

TestISOImage/PersistentMounts//var/lib/minikube (0.18s)

=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-927656 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.18s)

TestISOImage/PersistentMounts//var/lib/toolbox (0.17s)

=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-927656 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.17s)

TestISOImage/PersistentMounts//var/lib/boot2docker (0.17s)

=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-927656 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.17s)
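
All seven PersistentMounts cases run the same probe through `minikube ssh`: `df -t ext4 <dir> | grep <dir>` succeeds only when the directory sits on an ext4-backed (persistent) filesystem, so a tmpfs-backed path would fail the grep. A compact sketch that loops that probe over the mount points exercised above (illustrative only; profile name from the run):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	mounts := []string{
		"/data", "/var/lib/docker", "/var/lib/cni", "/var/lib/kubelet",
		"/var/lib/minikube", "/var/lib/toolbox", "/var/lib/boot2docker",
	}
	for _, dir := range mounts {
		// Same probe as iso_test.go:97: list only ext4 filesystems,
		// then require the directory to appear in the output.
		probe := fmt.Sprintf("df -t ext4 %s | grep %s", dir, dir)
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "guest-927656", "ssh", probe)
		if err := cmd.Run(); err != nil {
			fmt.Printf("%s: not on a persistent ext4 mount (%v)\n", dir, err)
			continue
		}
		fmt.Printf("%s: persistent\n", dir)
	}
}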

TestISOImage/VersionJSON (0.2s)

=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-927656 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1765151505-21409
iso_test.go:118:   kicbase_version: v0.0.48-1764843390-22032
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: 0d7c1d9864cc7aa82e32494e32331ce8be405026
--- PASS: TestISOImage/VersionJSON (0.20s)
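
The VersionJSON case reads /version.json out of the guest and parses it; the four fields echoed above map directly onto a small struct. A sketch of the decode (struct tags inferred from the printed field names, not taken from iso_test.go):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type isoVersion struct {
	ISOVersion      string `json:"iso_version"`
	KicbaseVersion  string `json:"kicbase_version"`
	MinikubeVersion string `json:"minikube_version"`
	Commit          string `json:"commit"`
}

func main() {
	raw, err := exec.Command("out/minikube-linux-amd64",
		"-p", "guest-927656", "ssh", "cat /version.json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var v isoVersion
	if err := json.Unmarshal(raw, &v); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("iso=%s kicbase=%s minikube=%s commit=%s\n",
		v.ISOVersion, v.KicbaseVersion, v.MinikubeVersion, v.Commit)
}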

TestISOImage/eBPFSupport (0.19s)

=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-927656 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
E1208 04:55:49.977056  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/auto-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/eBPFSupport (0.19s)
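
The eBPF probe is a plain file-existence test: /sys/kernel/btf/vmlinux is only exposed by kernels built with CONFIG_DEBUG_INFO_BTF, the type information that BTF-based (CO-RE) eBPF tooling relies on. The same check, sketched outside the test harness:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors the ssh command in the log: prints OK when BTF data is present.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "guest-927656",
		"ssh", "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	fmt.Println("BTF support:", strings.TrimSpace(string(out)))
}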

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mxdtd" [bc008f3c-7d94-4676-85cb-9f8590210724] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mxdtd" [bc008f3c-7d94-4676-85cb-9f8590210724] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004544485s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.21s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mxdtd" [bc008f3c-7d94-4676-85cb-9f8590210724] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.034772612s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-890156 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.21s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-890156 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.65s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-890156 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-890156 -n embed-certs-890156
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-890156 -n embed-certs-890156: exit status 2 (232.774343ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-890156 -n embed-certs-890156
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-890156 -n embed-certs-890156: exit status 2 (225.941325ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-890156 --alsologtostderr -v=1
E1208 04:56:08.234522  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/enable-default-cni-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:56:08.240951  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/enable-default-cni-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:56:08.252325  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/enable-default-cni-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:56:08.273719  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/enable-default-cni-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:56:08.315195  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/enable-default-cni-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:56:08.396504  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/enable-default-cni-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-890156 -n embed-certs-890156
E1208 04:56:08.558426  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/enable-default-cni-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-890156 -n embed-certs-890156
E1208 04:56:08.880520  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/enable-default-cni-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:56:08.926264  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:56:08.932679  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:56:08.944128  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:56:08.965563  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:56:09.007186  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.65s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lwgpf" [a5a3fa53-2cd0-44ef-a327-7d5b9d292b59] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1208 04:56:23.885024  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/custom-flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1208 04:56:28.730491  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/enable-default-cni-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lwgpf" [a5a3fa53-2cd0-44ef-a327-7d5b9d292b59] Running
E1208 04:56:29.421498  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/flannel-127227/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.005298927s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-815514 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/newest-cni/serial/Pause (2.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-815514 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-815514 -n newest-cni-815514
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-815514 -n newest-cni-815514: exit status 2 (209.984136ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-815514 -n newest-cni-815514
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-815514 -n newest-cni-815514: exit status 2 (208.612129ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-815514 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-815514 -n newest-cni-815514
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-815514 -n newest-cni-815514
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.26s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lwgpf" [a5a3fa53-2cd0-44ef-a327-7d5b9d292b59] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003699575s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-770356 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-770356 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-770356 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-770356 -n default-k8s-diff-port-770356
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-770356 -n default-k8s-diff-port-770356: exit status 2 (209.742179ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-770356 -n default-k8s-diff-port-770356
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-770356 -n default-k8s-diff-port-770356: exit status 2 (203.728024ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-770356 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-770356 -n default-k8s-diff-port-770356
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-770356 -n default-k8s-diff-port-770356
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.36s)

Test skip (52/437)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.3
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
361 TestNetworkPlugins/group/kubenet 4.44
369 TestNetworkPlugins/group/cilium 4.44
397 TestStartStop/group/disable-driver-mounts 0.2

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)
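
The skip reason "Test for darwin and windows" is an operating-system gate. The idiomatic Go shape of such a skip (a sketch, not the aaa_download_only_test.go source; the test name here is illustrative):

package example

import (
	"runtime"
	"testing"
)

func TestKubectlDownload(t *testing.T) {
	// Only darwin and windows need the separately downloaded kubectl checked;
	// everywhere else the test is a no-op.
	if runtime.GOOS != "darwin" && runtime.GOOS != "windows" {
		t.Skip("Test for darwin and windows")
	}
	// ... download-and-verify logic would follow ...
}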

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.3s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-301052 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
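
All eight tunnel cases skip with the same message because adjusting host routes requires root. One plausible pre-flight for that condition (a sketch only; the actual check lives at functional_test_tunnel_test.go:90) is to invoke `route` under `sudo -n`, which never prompts and fails when a password would be required:

package example

import (
	"os/exec"
	"testing"
)

// requireSudoRoute skips the calling test when passwordless sudo for `route`
// is unavailable, matching the skip message seen in the log above.
func requireSudoRoute(t *testing.T) {
	// `sudo -n` refuses to prompt; it exits non-zero if a password is needed.
	if err := exec.Command("sudo", "-n", "route").Run(); err != nil {
		t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
	}
}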

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
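
The gvisor suite is opt-in behind a --gvisor flag. The usual Go pattern for a flag-gated test, sketched here (illustrative, not the gvisor_addon_test.go source):

package example

import (
	"flag"
	"testing"
)

// The flag defaults to false, so the suite only runs when -gvisor is passed
// explicitly on the test command line.
var gvisor = flag.Bool("gvisor", false, "run the gvisor addon tests")

func TestGvisorAddon(t *testing.T) {
	if !*gvisor {
		t.Skip("skipping test because --gvisor=false")
	}
	// ... addon enablement and runtime-class checks would follow ...
}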

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (4.44s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-127227 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-127227

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-127227

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-127227

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-127227

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-127227

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-127227

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-127227

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-127227

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-127227

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-127227

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: /etc/hosts:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: /etc/resolv.conf:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-127227

>>> host: crictl pods:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: crictl containers:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> k8s: describe netcat deployment:
error: context "kubenet-127227" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-127227" does not exist

>>> k8s: netcat logs:
error: context "kubenet-127227" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-127227" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-127227" does not exist

>>> k8s: coredns logs:
error: context "kubenet-127227" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-127227" does not exist

>>> k8s: api server logs:
error: context "kubenet-127227" does not exist

>>> host: /etc/cni:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: ip a s:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: ip r s:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: iptables-save:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: iptables table nat:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-127227" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-127227" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-127227" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: kubelet daemon config:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> k8s: kubelet logs:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21409-125868/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Dec 2025 04:42:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.159:8443
  name: NoKubernetes-161907
contexts:
- context:
    cluster: NoKubernetes-161907
    extensions:
    - extension:
        last-update: Mon, 08 Dec 2025 04:42:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-161907
  name: NoKubernetes-161907
current-context: NoKubernetes-161907
kind: Config
users:
- name: NoKubernetes-161907
  user:
    client-certificate: /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/NoKubernetes-161907/client.crt
    client-key: /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/NoKubernetes-161907/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-127227

>>> host: docker daemon status:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: docker daemon config:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: docker system info:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: cri-docker daemon status:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: cri-docker daemon config:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: cri-dockerd version:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: containerd daemon status:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: containerd daemon config:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: containerd config dump:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: crio daemon status:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: crio daemon config:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: /etc/crio:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"

>>> host: crio config:
* Profile "kubenet-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127227"
----------------------- debugLogs end: kubenet-127227 [took: 4.259902224s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-127227" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-127227
--- SKIP: TestNetworkPlugins/group/kubenet (4.44s)
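Every debugLogs probe above fails with "context was not found" or "Profile ... not found" because the kubenet-127227 profile was never started: the skip fires before minikube creates the cluster, so no kubeconfig context or profile exists when the probes run. A minimal sketch of how such a probe pins kubectl to the profile name as its context (an assumption about the helper's shape, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
)

// probe runs kubectl against the context named after the profile; with no
// such context, kubectl exits non-zero with the configuration error seen
// throughout the log above.
func probe(profile string, args ...string) {
	kubectlArgs := append([]string{"--context", profile}, args...)
	out, err := exec.Command("kubectl", kubectlArgs...).CombinedOutput()
	if err != nil {
		fmt.Printf("probe failed: %v\n%s", err, out)
	}
}

func main() {
	probe("kubenet-127227", "get", "pods", "-A")
}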

TestNetworkPlugins/group/cilium (4.44s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
E1208 04:43:07.333738  129900 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/functional-940895/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:615: 
----------------------- debugLogs start: cilium-127227 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-127227

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-127227

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-127227

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-127227

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-127227

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-127227

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-127227

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-127227

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-127227

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-127227

>>> host: /etc/nsswitch.conf:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: /etc/hosts:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: /etc/resolv.conf:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-127227

>>> host: crictl pods:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: crictl containers:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> k8s: describe netcat deployment:
error: context "cilium-127227" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-127227" does not exist

>>> k8s: netcat logs:
error: context "cilium-127227" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-127227" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-127227" does not exist

>>> k8s: coredns logs:
error: context "cilium-127227" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-127227" does not exist

>>> k8s: api server logs:
error: context "cilium-127227" does not exist

>>> host: /etc/cni:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: ip a s:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: ip r s:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: iptables-save:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: iptables table nat:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-127227

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-127227

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-127227" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-127227" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-127227

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-127227

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-127227" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-127227" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-127227" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-127227" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-127227" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: kubelet daemon config:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> k8s: kubelet logs:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21409-125868/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 08 Dec 2025 04:42:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.61.159:8443
  name: NoKubernetes-161907
contexts:
- context:
    cluster: NoKubernetes-161907
    extensions:
    - extension:
        last-update: Mon, 08 Dec 2025 04:42:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-161907
  name: NoKubernetes-161907
current-context: NoKubernetes-161907
kind: Config
users:
- name: NoKubernetes-161907
  user:
    client-certificate: /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/NoKubernetes-161907/client.crt
    client-key: /home/jenkins/minikube-integration/21409-125868/.minikube/profiles/NoKubernetes-161907/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-127227

>>> host: docker daemon status:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: docker daemon config:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: docker system info:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: cri-docker daemon status:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: cri-docker daemon config:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: cri-dockerd version:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: containerd daemon status:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: containerd daemon config:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: containerd config dump:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: crio daemon status:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: crio daemon config:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: /etc/crio:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"

>>> host: crio config:
* Profile "cilium-127227" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127227"
----------------------- debugLogs end: cilium-127227 [took: 4.258312546s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-127227" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-127227
--- SKIP: TestNetworkPlugins/group/cilium (4.44s)
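Note that both "k8s: kubectl config" dumps above show current-context: NoKubernetes-161907, a leftover from an earlier test; neither kubenet-127227 nor cilium-127227 was ever written into the kubeconfig, which is why every context-pinned probe fails. A quick way to confirm which context is active (a sketch, assuming kubectl is on PATH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prints the kubeconfig's current-context, e.g. NoKubernetes-161907.
	out, err := exec.Command("kubectl", "config", "current-context").Output()
	if err != nil {
		fmt.Println("no current context:", err)
		return
	}
	fmt.Print(string(out))
}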

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on VirtualBox
helpers_test.go:175: Cleaning up "disable-driver-mounts-778391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-778391
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)
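Even skipped tests run the profile cleanup shown above, so a stray profile never leaks into later tests. A sketch of the pattern with illustrative names (the real helper lives in helpers_test.go and may differ):

package integration

import (
	"os/exec"
	"testing"
)

// cleanupProfile registers a deferred "minikube delete" for the profile,
// mirroring the "Cleaning up ... profile" lines in the log above.
func cleanupProfile(t *testing.T, profile string) {
	t.Helper()
	t.Cleanup(func() {
		out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput()
		if err != nil {
			t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
		}
	})
}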