Test Report: KVM_Linux_crio 22000

3f3a61283993ee602bd323c44b704727ac3a4ece:2025-11-29:42558

Failed tests (5/345)

Order  Test                                                Duration (s)
37     TestAddons/parallel/Ingress                         159.62
121    TestFunctional/parallel/ImageCommands/ImageBuild    6.22
130    TestFunctional/parallel/ImageCommands/ImageRemove   3.39
244    TestPreload                                         153.63
300    TestPause/serial/SecondStartNoReconfiguration       53.07
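
To reproduce one of these failures locally, the minikube contributor docs describe re-running a single integration test from a source checkout. A minimal sketch, assuming the same driver and runtime as this job (the TEST_ARGS quoting may need adjusting for your shell):

	# From the minikube repo root: rebuild, then re-run only the failing test.
	env TEST_ARGS="-minikube-start-args='--driver=kvm2 --container-runtime=crio' -test.run TestAddons/parallel/Ingress" make integration
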
TestAddons/parallel/Ingress (159.62s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-213983 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-213983 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-213983 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [91d59907-f6f5-4a62-a39f-f6c5de4fe9d9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [91d59907-f6f5-4a62-a39f-f6c5de4fe9d9] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004096513s
I1129 08:32:01.794905    9613 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-213983 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-213983 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.725657623s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-213983 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-213983 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.35
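
The stderr above is curl's exit code propagated through minikube ssh: 28 is CURLE_OPERATION_TIMEDOUT, meaning the request to 127.0.0.1:80 inside the VM never completed even though the controller pod had reported ready earlier in the test. A sketch for reproducing and narrowing this down by hand; the resource names assume the stock ingress addon layout:

	# Repeat the exact probe the test ran (command copied from the log above):
	out/minikube-linux-amd64 -p addons-213983 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

	# Check that the controller is running, then inspect its recent logs:
	kubectl --context addons-213983 -n ingress-nginx get pods -o wide
	kubectl --context addons-213983 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50
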
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-213983 -n addons-213983
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-213983 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-213983 logs -n 25: (1.179590517s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-915524                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-915524 │ jenkins │ v1.37.0 │ 29 Nov 25 08:29 UTC │ 29 Nov 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-065244 --alsologtostderr --binary-mirror http://127.0.0.1:34259 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-065244 │ jenkins │ v1.37.0 │ 29 Nov 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-065244                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-065244 │ jenkins │ v1.37.0 │ 29 Nov 25 08:29 UTC │ 29 Nov 25 08:29 UTC │
	│ addons  │ enable dashboard -p addons-213983                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:29 UTC │                     │
	│ addons  │ disable dashboard -p addons-213983                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:29 UTC │                     │
	│ start   │ -p addons-213983 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:29 UTC │ 29 Nov 25 08:31 UTC │
	│ addons  │ addons-213983 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:31 UTC │ 29 Nov 25 08:31 UTC │
	│ addons  │ addons-213983 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:31 UTC │ 29 Nov 25 08:31 UTC │
	│ addons  │ enable headlamp -p addons-213983 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:31 UTC │ 29 Nov 25 08:31 UTC │
	│ addons  │ addons-213983 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:31 UTC │ 29 Nov 25 08:31 UTC │
	│ addons  │ addons-213983 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:31 UTC │ 29 Nov 25 08:31 UTC │
	│ addons  │ addons-213983 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:31 UTC │ 29 Nov 25 08:31 UTC │
	│ addons  │ addons-213983 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:31 UTC │ 29 Nov 25 08:31 UTC │
	│ ip      │ addons-213983 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:31 UTC │ 29 Nov 25 08:31 UTC │
	│ addons  │ addons-213983 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:31 UTC │ 29 Nov 25 08:31 UTC │
	│ ssh     │ addons-213983 ssh cat /opt/local-path-provisioner/pvc-201f1235-cf8d-4120-9ec7-7fe42aca63d3_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:31 UTC │ 29 Nov 25 08:31 UTC │
	│ addons  │ addons-213983 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:31 UTC │ 29 Nov 25 08:32 UTC │
	│ addons  │ addons-213983 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:31 UTC │ 29 Nov 25 08:32 UTC │
	│ ssh     │ addons-213983 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:32 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-213983                                                                                                                                                                                                                                                                                                                                                                                         │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:32 UTC │ 29 Nov 25 08:32 UTC │
	│ addons  │ addons-213983 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:32 UTC │ 29 Nov 25 08:32 UTC │
	│ addons  │ addons-213983 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:32 UTC │ 29 Nov 25 08:32 UTC │
	│ addons  │ addons-213983 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:32 UTC │ 29 Nov 25 08:32 UTC │
	│ addons  │ addons-213983 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:32 UTC │ 29 Nov 25 08:32 UTC │
	│ ip      │ addons-213983 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-213983        │ jenkins │ v1.37.0 │ 29 Nov 25 08:34 UTC │ 29 Nov 25 08:34 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 08:29:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 08:29:04.941650   10285 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:29:04.941890   10285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:29:04.941898   10285 out.go:374] Setting ErrFile to fd 2...
	I1129 08:29:04.941903   10285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:29:04.942103   10285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
	I1129 08:29:04.942593   10285 out.go:368] Setting JSON to false
	I1129 08:29:04.943357   10285 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":689,"bootTime":1764404256,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 08:29:04.943413   10285 start.go:143] virtualization: kvm guest
	I1129 08:29:04.945332   10285 out.go:179] * [addons-213983] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 08:29:04.946554   10285 notify.go:221] Checking for updates...
	I1129 08:29:04.946582   10285 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 08:29:04.947940   10285 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 08:29:04.949265   10285 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5651/kubeconfig
	I1129 08:29:04.950672   10285 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5651/.minikube
	I1129 08:29:04.954004   10285 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 08:29:04.955263   10285 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 08:29:04.956553   10285 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 08:29:04.986806   10285 out.go:179] * Using the kvm2 driver based on user configuration
	I1129 08:29:04.987868   10285 start.go:309] selected driver: kvm2
	I1129 08:29:04.987883   10285 start.go:927] validating driver "kvm2" against <nil>
	I1129 08:29:04.987892   10285 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 08:29:04.988598   10285 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 08:29:04.988887   10285 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 08:29:04.988916   10285 cni.go:84] Creating CNI manager for ""
	I1129 08:29:04.988972   10285 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1129 08:29:04.988984   10285 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1129 08:29:04.989044   10285 start.go:353] cluster config:
	{Name:addons-213983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-213983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:29:04.989164   10285 iso.go:125] acquiring lock: {Name:mk0184b92a126aea44cd27d4836c247b817b0333 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 08:29:04.991686   10285 out.go:179] * Starting "addons-213983" primary control-plane node in "addons-213983" cluster
	I1129 08:29:04.992966   10285 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 08:29:04.992994   10285 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5651/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 08:29:04.993002   10285 cache.go:65] Caching tarball of preloaded images
	I1129 08:29:04.993086   10285 preload.go:238] Found /home/jenkins/minikube-integration/22000-5651/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 08:29:04.993098   10285 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 08:29:04.993415   10285 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/config.json ...
	I1129 08:29:04.993440   10285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/config.json: {Name:mk4eeb675f5987feebb2a455ac9a8d4515762862 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:29:04.993613   10285 start.go:360] acquireMachinesLock for addons-213983: {Name:mke0bd376b87e419ebada00803bbcbb9230316d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1129 08:29:04.993698   10285 start.go:364] duration metric: took 61.527µs to acquireMachinesLock for "addons-213983"
	I1129 08:29:04.993720   10285 start.go:93] Provisioning new machine with config: &{Name:addons-213983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-213983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 08:29:04.993795   10285 start.go:125] createHost starting for "" (driver="kvm2")
	I1129 08:29:04.995361   10285 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1129 08:29:04.995524   10285 start.go:159] libmachine.API.Create for "addons-213983" (driver="kvm2")
	I1129 08:29:04.995560   10285 client.go:173] LocalClient.Create starting
	I1129 08:29:04.995651   10285 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem
	I1129 08:29:05.146317   10285 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/cert.pem
	I1129 08:29:05.209759   10285 main.go:143] libmachine: creating domain...
	I1129 08:29:05.209780   10285 main.go:143] libmachine: creating network...
	I1129 08:29:05.211214   10285 main.go:143] libmachine: found existing default network
	I1129 08:29:05.211403   10285 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1129 08:29:05.211928   10285 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cf68e0}
	I1129 08:29:05.212035   10285 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-213983</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1129 08:29:05.217904   10285 main.go:143] libmachine: creating private network mk-addons-213983 192.168.39.0/24...
	I1129 08:29:05.282007   10285 main.go:143] libmachine: private network mk-addons-213983 192.168.39.0/24 created
	I1129 08:29:05.282335   10285 main.go:143] libmachine: <network>
	  <name>mk-addons-213983</name>
	  <uuid>dc23dbc1-fe82-41d5-8043-39939140e120</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:f0:51:4d'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1129 08:29:05.282364   10285 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983 ...
	I1129 08:29:05.282397   10285 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22000-5651/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1129 08:29:05.282410   10285 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22000-5651/.minikube
	I1129 08:29:05.282500   10285 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22000-5651/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22000-5651/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1129 08:29:05.547956   10285 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa...
	I1129 08:29:05.692130   10285 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/addons-213983.rawdisk...
	I1129 08:29:05.692172   10285 main.go:143] libmachine: Writing magic tar header
	I1129 08:29:05.692191   10285 main.go:143] libmachine: Writing SSH key tar header
	I1129 08:29:05.692287   10285 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983 ...
	I1129 08:29:05.692354   10285 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983
	I1129 08:29:05.692383   10285 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983 (perms=drwx------)
	I1129 08:29:05.692403   10285 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22000-5651/.minikube/machines
	I1129 08:29:05.692423   10285 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22000-5651/.minikube/machines (perms=drwxr-xr-x)
	I1129 08:29:05.692439   10285 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22000-5651/.minikube
	I1129 08:29:05.692457   10285 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22000-5651/.minikube (perms=drwxr-xr-x)
	I1129 08:29:05.692467   10285 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22000-5651
	I1129 08:29:05.692477   10285 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22000-5651 (perms=drwxrwxr-x)
	I1129 08:29:05.692488   10285 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1129 08:29:05.692504   10285 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1129 08:29:05.692523   10285 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1129 08:29:05.692539   10285 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1129 08:29:05.692554   10285 main.go:143] libmachine: checking permissions on dir: /home
	I1129 08:29:05.692565   10285 main.go:143] libmachine: skipping /home - not owner
	I1129 08:29:05.692571   10285 main.go:143] libmachine: defining domain...
	I1129 08:29:05.693886   10285 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-213983</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/addons-213983.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-213983'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1129 08:29:05.701485   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:7d:25:53 in network default
	I1129 08:29:05.702148   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:05.702169   10285 main.go:143] libmachine: starting domain...
	I1129 08:29:05.702173   10285 main.go:143] libmachine: ensuring networks are active...
	I1129 08:29:05.703213   10285 main.go:143] libmachine: Ensuring network default is active
	I1129 08:29:05.703597   10285 main.go:143] libmachine: Ensuring network mk-addons-213983 is active
	I1129 08:29:05.704252   10285 main.go:143] libmachine: getting domain XML...
	I1129 08:29:05.705309   10285 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-213983</name>
	  <uuid>5dcc2ad0-e106-4ad5-a099-f42601726d5c</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/addons-213983.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:70:67:4e'/>
	      <source network='mk-addons-213983'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:7d:25:53'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1129 08:29:06.990498   10285 main.go:143] libmachine: waiting for domain to start...
	I1129 08:29:06.991755   10285 main.go:143] libmachine: domain is now running
	I1129 08:29:06.991770   10285 main.go:143] libmachine: waiting for IP...
	I1129 08:29:06.992427   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:06.992988   10285 main.go:143] libmachine: no network interface addresses found for domain addons-213983 (source=lease)
	I1129 08:29:06.993013   10285 main.go:143] libmachine: trying to list again with source=arp
	I1129 08:29:06.993269   10285 main.go:143] libmachine: unable to find current IP address of domain addons-213983 in network mk-addons-213983 (interfaces detected: [])
	I1129 08:29:06.993315   10285 retry.go:31] will retry after 268.455074ms: waiting for domain to come up
	I1129 08:29:07.263750   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:07.264282   10285 main.go:143] libmachine: no network interface addresses found for domain addons-213983 (source=lease)
	I1129 08:29:07.264301   10285 main.go:143] libmachine: trying to list again with source=arp
	I1129 08:29:07.264636   10285 main.go:143] libmachine: unable to find current IP address of domain addons-213983 in network mk-addons-213983 (interfaces detected: [])
	I1129 08:29:07.264668   10285 retry.go:31] will retry after 349.254384ms: waiting for domain to come up
	I1129 08:29:07.615170   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:07.615822   10285 main.go:143] libmachine: no network interface addresses found for domain addons-213983 (source=lease)
	I1129 08:29:07.615852   10285 main.go:143] libmachine: trying to list again with source=arp
	I1129 08:29:07.616212   10285 main.go:143] libmachine: unable to find current IP address of domain addons-213983 in network mk-addons-213983 (interfaces detected: [])
	I1129 08:29:07.616246   10285 retry.go:31] will retry after 459.801116ms: waiting for domain to come up
	I1129 08:29:08.077850   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:08.078333   10285 main.go:143] libmachine: no network interface addresses found for domain addons-213983 (source=lease)
	I1129 08:29:08.078348   10285 main.go:143] libmachine: trying to list again with source=arp
	I1129 08:29:08.078668   10285 main.go:143] libmachine: unable to find current IP address of domain addons-213983 in network mk-addons-213983 (interfaces detected: [])
	I1129 08:29:08.078700   10285 retry.go:31] will retry after 375.355878ms: waiting for domain to come up
	I1129 08:29:08.456047   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:08.456527   10285 main.go:143] libmachine: no network interface addresses found for domain addons-213983 (source=lease)
	I1129 08:29:08.456538   10285 main.go:143] libmachine: trying to list again with source=arp
	I1129 08:29:08.456893   10285 main.go:143] libmachine: unable to find current IP address of domain addons-213983 in network mk-addons-213983 (interfaces detected: [])
	I1129 08:29:08.456922   10285 retry.go:31] will retry after 618.8875ms: waiting for domain to come up
	I1129 08:29:09.077770   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:09.078321   10285 main.go:143] libmachine: no network interface addresses found for domain addons-213983 (source=lease)
	I1129 08:29:09.078338   10285 main.go:143] libmachine: trying to list again with source=arp
	I1129 08:29:09.078703   10285 main.go:143] libmachine: unable to find current IP address of domain addons-213983 in network mk-addons-213983 (interfaces detected: [])
	I1129 08:29:09.078734   10285 retry.go:31] will retry after 797.827667ms: waiting for domain to come up
	I1129 08:29:09.877593   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:09.878060   10285 main.go:143] libmachine: no network interface addresses found for domain addons-213983 (source=lease)
	I1129 08:29:09.878076   10285 main.go:143] libmachine: trying to list again with source=arp
	I1129 08:29:09.878363   10285 main.go:143] libmachine: unable to find current IP address of domain addons-213983 in network mk-addons-213983 (interfaces detected: [])
	I1129 08:29:09.878391   10285 retry.go:31] will retry after 845.944805ms: waiting for domain to come up
	I1129 08:29:10.725931   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:10.726453   10285 main.go:143] libmachine: no network interface addresses found for domain addons-213983 (source=lease)
	I1129 08:29:10.726472   10285 main.go:143] libmachine: trying to list again with source=arp
	I1129 08:29:10.726799   10285 main.go:143] libmachine: unable to find current IP address of domain addons-213983 in network mk-addons-213983 (interfaces detected: [])
	I1129 08:29:10.726848   10285 retry.go:31] will retry after 1.075321822s: waiting for domain to come up
	I1129 08:29:11.804087   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:11.804552   10285 main.go:143] libmachine: no network interface addresses found for domain addons-213983 (source=lease)
	I1129 08:29:11.804566   10285 main.go:143] libmachine: trying to list again with source=arp
	I1129 08:29:11.804891   10285 main.go:143] libmachine: unable to find current IP address of domain addons-213983 in network mk-addons-213983 (interfaces detected: [])
	I1129 08:29:11.804921   10285 retry.go:31] will retry after 1.594523451s: waiting for domain to come up
	I1129 08:29:13.401606   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:13.402314   10285 main.go:143] libmachine: no network interface addresses found for domain addons-213983 (source=lease)
	I1129 08:29:13.402336   10285 main.go:143] libmachine: trying to list again with source=arp
	I1129 08:29:13.402785   10285 main.go:143] libmachine: unable to find current IP address of domain addons-213983 in network mk-addons-213983 (interfaces detected: [])
	I1129 08:29:13.402838   10285 retry.go:31] will retry after 2.024175137s: waiting for domain to come up
	I1129 08:29:15.429057   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:15.429610   10285 main.go:143] libmachine: no network interface addresses found for domain addons-213983 (source=lease)
	I1129 08:29:15.429625   10285 main.go:143] libmachine: trying to list again with source=arp
	I1129 08:29:15.430037   10285 main.go:143] libmachine: unable to find current IP address of domain addons-213983 in network mk-addons-213983 (interfaces detected: [])
	I1129 08:29:15.430071   10285 retry.go:31] will retry after 2.881189232s: waiting for domain to come up
	I1129 08:29:18.314782   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:18.315319   10285 main.go:143] libmachine: no network interface addresses found for domain addons-213983 (source=lease)
	I1129 08:29:18.315336   10285 main.go:143] libmachine: trying to list again with source=arp
	I1129 08:29:18.315574   10285 main.go:143] libmachine: unable to find current IP address of domain addons-213983 in network mk-addons-213983 (interfaces detected: [])
	I1129 08:29:18.315616   10285 retry.go:31] will retry after 2.402377569s: waiting for domain to come up
	I1129 08:29:20.719600   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:20.720273   10285 main.go:143] libmachine: domain addons-213983 has current primary IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:20.720290   10285 main.go:143] libmachine: found domain IP: 192.168.39.35
	I1129 08:29:20.720297   10285 main.go:143] libmachine: reserving static IP address...
	I1129 08:29:20.720767   10285 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-213983", mac: "52:54:00:70:67:4e", ip: "192.168.39.35"} in network mk-addons-213983
	I1129 08:29:20.914740   10285 main.go:143] libmachine: reserved static IP address 192.168.39.35 for domain addons-213983
	I1129 08:29:20.914762   10285 main.go:143] libmachine: waiting for SSH...
	I1129 08:29:20.914773   10285 main.go:143] libmachine: Getting to WaitForSSH function...
	I1129 08:29:20.917763   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:20.918221   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:minikube Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:20.918243   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:20.918514   10285 main.go:143] libmachine: Using SSH client type: native
	I1129 08:29:20.918822   10285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1129 08:29:20.918853   10285 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1129 08:29:21.031342   10285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 08:29:21.031774   10285 main.go:143] libmachine: domain creation complete
	I1129 08:29:21.033270   10285 machine.go:94] provisionDockerMachine start ...
	I1129 08:29:21.035823   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:21.036235   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:21.036259   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:21.036463   10285 main.go:143] libmachine: Using SSH client type: native
	I1129 08:29:21.036685   10285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1129 08:29:21.036703   10285 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 08:29:21.148626   10285 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1129 08:29:21.148652   10285 buildroot.go:166] provisioning hostname "addons-213983"
	I1129 08:29:21.151667   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:21.152081   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:21.152107   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:21.152268   10285 main.go:143] libmachine: Using SSH client type: native
	I1129 08:29:21.152502   10285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1129 08:29:21.152517   10285 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-213983 && echo "addons-213983" | sudo tee /etc/hostname
	I1129 08:29:21.280789   10285 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-213983
	
	I1129 08:29:21.283778   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:21.284222   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:21.284252   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:21.284460   10285 main.go:143] libmachine: Using SSH client type: native
	I1129 08:29:21.284717   10285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1129 08:29:21.284733   10285 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-213983' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-213983/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-213983' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 08:29:21.407327   10285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 08:29:21.407360   10285 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5651/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5651/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5651/.minikube}
	I1129 08:29:21.407385   10285 buildroot.go:174] setting up certificates
	I1129 08:29:21.407395   10285 provision.go:84] configureAuth start
	I1129 08:29:21.410473   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:21.410941   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:21.410967   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:21.413556   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:21.413928   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:21.413955   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:21.414096   10285 provision.go:143] copyHostCerts
	I1129 08:29:21.414172   10285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5651/.minikube/ca.pem (1082 bytes)
	I1129 08:29:21.414309   10285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5651/.minikube/cert.pem (1123 bytes)
	I1129 08:29:21.414375   10285 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5651/.minikube/key.pem (1679 bytes)
	I1129 08:29:21.414420   10285 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5651/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca-key.pem org=jenkins.addons-213983 san=[127.0.0.1 192.168.39.35 addons-213983 localhost minikube]
	I1129 08:29:21.456613   10285 provision.go:177] copyRemoteCerts
	I1129 08:29:21.456666   10285 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 08:29:21.459384   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:21.459796   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:21.459820   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:21.459991   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	I1129 08:29:21.564402   10285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 08:29:21.592896   10285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1129 08:29:21.620510   10285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 08:29:21.649481   10285 provision.go:87] duration metric: took 242.071942ms to configureAuth
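configureAuth pushed the CA plus a freshly generated server keypair (SANs: 127.0.0.1, 192.168.39.35, addons-213983, localhost, minikube) to Docker-style paths under /etc/docker. A minimal verification sketch, assuming only the paths shown in the log:

	# Hedged sketch: the provisioned server cert should chain to the copied CA
	openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem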
	I1129 08:29:21.649512   10285 buildroot.go:189] setting minikube options for container-runtime
	I1129 08:29:21.649677   10285 config.go:182] Loaded profile config "addons-213983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:29:21.652622   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:21.653002   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:21.653024   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:21.653187   10285 main.go:143] libmachine: Using SSH client type: native
	I1129 08:29:21.653370   10285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1129 08:29:21.653388   10285 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 08:29:21.889650   10285 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 08:29:21.889675   10285 machine.go:97] duration metric: took 856.389368ms to provisionDockerMachine
	I1129 08:29:21.889685   10285 client.go:176] duration metric: took 16.894114542s to LocalClient.Create
	I1129 08:29:21.889695   10285 start.go:167] duration metric: took 16.894172851s to libmachine.API.Create "addons-213983"
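Provisioning ends by writing a CRIO_MINIKUBE_OPTIONS drop-in (the --insecure-registry flag for the 10.96.0.0/12 service CIDR) and restarting CRI-O. To inspect the result on the guest, a sketch (the file path is echoed in the log; the status check is an assumption):

	# Hedged sketch: show the generated drop-in and confirm crio came back after the restart
	cat /etc/sysconfig/crio.minikube
	systemctl is-active crio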
	I1129 08:29:21.889701   10285 start.go:293] postStartSetup for "addons-213983" (driver="kvm2")
	I1129 08:29:21.889709   10285 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 08:29:21.889766   10285 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 08:29:21.892861   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:21.893258   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:21.893294   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:21.893456   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	I1129 08:29:21.979733   10285 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 08:29:21.984678   10285 info.go:137] Remote host: Buildroot 2025.02
	I1129 08:29:21.984708   10285 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5651/.minikube/addons for local assets ...
	I1129 08:29:21.984789   10285 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5651/.minikube/files for local assets ...
	I1129 08:29:21.984852   10285 start.go:296] duration metric: took 95.143924ms for postStartSetup
	I1129 08:29:21.987713   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:21.988082   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:21.988101   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:21.988307   10285 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/config.json ...
	I1129 08:29:21.988477   10285 start.go:128] duration metric: took 16.994671627s to createHost
	I1129 08:29:21.990603   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:21.991049   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:21.991070   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:21.991243   10285 main.go:143] libmachine: Using SSH client type: native
	I1129 08:29:21.991437   10285 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I1129 08:29:21.991447   10285 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1129 08:29:22.106578   10285 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764404962.065116792
	
	I1129 08:29:22.106607   10285 fix.go:216] guest clock: 1764404962.065116792
	I1129 08:29:22.106614   10285 fix.go:229] Guest: 2025-11-29 08:29:22.065116792 +0000 UTC Remote: 2025-11-29 08:29:21.988487585 +0000 UTC m=+17.093571731 (delta=76.629207ms)
	I1129 08:29:22.106630   10285 fix.go:200] guest clock delta is within tolerance: 76.629207ms
	I1129 08:29:22.106635   10285 start.go:83] releasing machines lock for "addons-213983", held for 17.112925661s
	I1129 08:29:22.109188   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:22.109588   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:22.109610   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:22.110151   10285 ssh_runner.go:195] Run: cat /version.json
	I1129 08:29:22.110242   10285 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 08:29:22.113320   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:22.113644   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:22.113708   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:22.113732   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:22.113918   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	I1129 08:29:22.114117   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:22.114148   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:22.114335   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	I1129 08:29:22.196369   10285 ssh_runner.go:195] Run: systemctl --version
	I1129 08:29:22.232263   10285 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 08:29:22.389092   10285 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 08:29:22.395481   10285 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 08:29:22.395554   10285 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 08:29:22.415590   10285 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 08:29:22.415610   10285 start.go:496] detecting cgroup driver to use...
	I1129 08:29:22.415664   10285 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 08:29:22.434404   10285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 08:29:22.450971   10285 docker.go:218] disabling cri-docker service (if available) ...
	I1129 08:29:22.451042   10285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 08:29:22.468967   10285 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 08:29:22.484925   10285 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 08:29:22.626580   10285 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 08:29:22.834112   10285 docker.go:234] disabling docker service ...
	I1129 08:29:22.834189   10285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 08:29:22.851189   10285 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 08:29:22.866056   10285 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 08:29:23.016134   10285 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 08:29:23.147312   10285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 08:29:23.162822   10285 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 08:29:23.184881   10285 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 08:29:23.184956   10285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 08:29:23.197343   10285 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 08:29:23.197406   10285 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 08:29:23.210109   10285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 08:29:23.222662   10285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 08:29:23.235638   10285 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 08:29:23.249104   10285 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 08:29:23.261593   10285 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 08:29:23.281996   10285 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
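The sed runs above edit /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, cgroupfs as cgroup manager, conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 under default_sysctls. A sketch to eyeball the resulting keys (the grep pattern is illustrative):

	# Hedged sketch: the settings the sed edits above should have left behind
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf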
	I1129 08:29:23.293887   10285 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 08:29:23.304041   10285 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1129 08:29:23.304111   10285 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1129 08:29:23.323711   10285 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 08:29:23.335606   10285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 08:29:23.474847   10285 ssh_runner.go:195] Run: sudo systemctl restart crio
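The earlier sysctl probe failed only because br_netfilter was not loaded yet; after the modprobe the key exists, IPv4 forwarding is switched on, and CRI-O is restarted to pick up the new config. A verification sketch (module and keys taken from the log):

	# Hedged sketch: after modprobe the bridge-netfilter key resolves instead of ENOENT
	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables
	cat /proc/sys/net/ipv4/ip_forward   # set to 1 by the echo above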
	I1129 08:29:23.587553   10285 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 08:29:23.587662   10285 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 08:29:23.592803   10285 start.go:564] Will wait 60s for crictl version
	I1129 08:29:23.592888   10285 ssh_runner.go:195] Run: which crictl
	I1129 08:29:23.596962   10285 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1129 08:29:23.633666   10285 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1129 08:29:23.633805   10285 ssh_runner.go:195] Run: crio --version
	I1129 08:29:23.664186   10285 ssh_runner.go:195] Run: crio --version
	I1129 08:29:23.695818   10285 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1129 08:29:23.699703   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:23.700220   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:23.700252   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:23.700616   10285 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1129 08:29:23.705373   10285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
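The one-liner above is minikube's replace-or-append pattern for the root-owned /etc/hosts: filter out any stale line for the name, append the fresh mapping to a temp file, then sudo cp it back (a plain > redirection would not run as root). The same shape, spelled out as a sketch:

	# Hedged sketch of the replace-then-copy pattern for a root-owned file
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.39.1\thost.minikube.internal\n'; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts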
	I1129 08:29:23.720515   10285 kubeadm.go:884] updating cluster {Name:addons-213983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-213983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 08:29:23.720665   10285 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 08:29:23.720711   10285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 08:29:23.751939   10285 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1129 08:29:23.752015   10285 ssh_runner.go:195] Run: which lz4
	I1129 08:29:23.756365   10285 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1129 08:29:23.761024   10285 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1129 08:29:23.761055   10285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1129 08:29:25.148583   10285 crio.go:462] duration metric: took 1.392254897s to copy over tarball
	I1129 08:29:25.148651   10285 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1129 08:29:26.785670   10285 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.636986095s)
	I1129 08:29:26.785708   10285 crio.go:469] duration metric: took 1.637095359s to extract the tarball
	I1129 08:29:26.785718   10285 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1129 08:29:26.826410   10285 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 08:29:26.864204   10285 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 08:29:26.864225   10285 cache_images.go:86] Images are preloaded, skipping loading
	I1129 08:29:26.864233   10285 kubeadm.go:935] updating node { 192.168.39.35 8443 v1.34.1 crio true true} ...
	I1129 08:29:26.864306   10285 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-213983 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.35
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-213983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 08:29:26.864368   10285 ssh_runner.go:195] Run: crio config
	I1129 08:29:26.911182   10285 cni.go:84] Creating CNI manager for ""
	I1129 08:29:26.911207   10285 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1129 08:29:26.911222   10285 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 08:29:26.911243   10285 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.35 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-213983 NodeName:addons-213983 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.35"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.35 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 08:29:26.911361   10285 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.35
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-213983"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.35"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.35"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
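That is the complete generated kubeadm config: InitConfiguration, ClusterConfiguration, KubeletConfiguration (disk evictions disabled for CI), and KubeProxyConfiguration in one multi-document file. To sanity-check such a file without mutating the node, a sketch using kubeadm's standard --dry-run flag (the path is the one the log scps the config to):

	# Hedged sketch: exercise the generated config end to end without changing the host
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run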
	
	I1129 08:29:26.911429   10285 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 08:29:26.924303   10285 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 08:29:26.924377   10285 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 08:29:26.936667   10285 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1129 08:29:26.957773   10285 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 08:29:26.978448   10285 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1129 08:29:26.999006   10285 ssh_runner.go:195] Run: grep 192.168.39.35	control-plane.minikube.internal$ /etc/hosts
	I1129 08:29:27.003253   10285 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.35	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 08:29:27.018315   10285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 08:29:27.159446   10285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 08:29:27.198201   10285 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983 for IP: 192.168.39.35
	I1129 08:29:27.198232   10285 certs.go:195] generating shared ca certs ...
	I1129 08:29:27.198256   10285 certs.go:227] acquiring lock for ca certs: {Name:mk263acc791d5a2c77504c81548ce554781ff9eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:29:27.198427   10285 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5651/.minikube/ca.key
	I1129 08:29:27.247188   10285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt ...
	I1129 08:29:27.247215   10285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt: {Name:mk552e6105a9ca0b96f5e1023122f3c4ad1847b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:29:27.247369   10285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5651/.minikube/ca.key ...
	I1129 08:29:27.247381   10285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/ca.key: {Name:mk6b380c171b8829a1009a78de05f80d4b966e70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:29:27.247467   10285 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5651/.minikube/proxy-client-ca.key
	I1129 08:29:27.339982   10285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5651/.minikube/proxy-client-ca.crt ...
	I1129 08:29:27.340010   10285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/proxy-client-ca.crt: {Name:mk63c12be1e451297146b6eb79fdafc4ce114dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:29:27.340173   10285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5651/.minikube/proxy-client-ca.key ...
	I1129 08:29:27.340186   10285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/proxy-client-ca.key: {Name:mke1599236d8b2a6aa74e227a9f9075a07eecc61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:29:27.340260   10285 certs.go:257] generating profile certs ...
	I1129 08:29:27.340312   10285 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.key
	I1129 08:29:27.340325   10285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt with IP's: []
	I1129 08:29:27.433916   10285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt ...
	I1129 08:29:27.433944   10285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: {Name:mk36e513ea879d648e4d0e378fd7899ebb274168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:29:27.434113   10285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.key ...
	I1129 08:29:27.434123   10285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.key: {Name:mk6a45e42a3e8bff076f5d1bb546ef273de51284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:29:27.434191   10285 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/apiserver.key.70e97532
	I1129 08:29:27.434210   10285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/apiserver.crt.70e97532 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.35]
	I1129 08:29:27.493925   10285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/apiserver.crt.70e97532 ...
	I1129 08:29:27.493952   10285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/apiserver.crt.70e97532: {Name:mk3376feaf6f33ede1fed30d8e619a074cf4f5cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:29:27.494098   10285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/apiserver.key.70e97532 ...
	I1129 08:29:27.494110   10285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/apiserver.key.70e97532: {Name:mk17347ef18b6b7d9bb022440988bee68c9cd104 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:29:27.494184   10285 certs.go:382] copying /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/apiserver.crt.70e97532 -> /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/apiserver.crt
	I1129 08:29:27.494254   10285 certs.go:386] copying /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/apiserver.key.70e97532 -> /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/apiserver.key
	I1129 08:29:27.494307   10285 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/proxy-client.key
	I1129 08:29:27.494324   10285 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/proxy-client.crt with IP's: []
	I1129 08:29:27.581852   10285 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/proxy-client.crt ...
	I1129 08:29:27.581879   10285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/proxy-client.crt: {Name:mk1f1bb87a8e26238ddb37c5ca4d6aa876640959 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:29:27.582029   10285 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/proxy-client.key ...
	I1129 08:29:27.582040   10285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/proxy-client.key: {Name:mkb00a18bacb33d79478898b1eb9403d7c24996a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:29:27.582214   10285 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 08:29:27.582249   10285 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem (1082 bytes)
	I1129 08:29:27.582273   10285 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/cert.pem (1123 bytes)
	I1129 08:29:27.582298   10285 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/key.pem (1679 bytes)
	I1129 08:29:27.582903   10285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 08:29:27.612577   10285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 08:29:27.640188   10285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 08:29:27.669326   10285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 08:29:27.697982   10285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1129 08:29:27.728976   10285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1129 08:29:27.759008   10285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 08:29:27.786727   10285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 08:29:27.813890   10285 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 08:29:27.841569   10285 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
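All shared and profile certs are now on the guest under /var/lib/minikube/certs. A sketch to confirm the apiserver cert carries the SANs requested earlier (10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.35); the openssl invocation is standard but added here for illustration:

	# Hedged sketch: list the SANs baked into the provisioned apiserver cert
	openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'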
	I1129 08:29:27.860883   10285 ssh_runner.go:195] Run: openssl version
	I1129 08:29:27.867179   10285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 08:29:27.883761   10285 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 08:29:27.889035   10285 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 08:29:27.889081   10285 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 08:29:27.896172   10285 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
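The b5213941.0 symlink is OpenSSL's subject-hash lookup name for the minikube CA, which is why the preceding step computed the hash first. The derivation, as a sketch:

	# Hedged sketch: the name of the /etc/ssl/certs symlink is the CA's subject hash
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941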
	I1129 08:29:27.908895   10285 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 08:29:27.914750   10285 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1129 08:29:27.914800   10285 kubeadm.go:401] StartCluster: {Name:addons-213983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-213983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:29:27.914896   10285 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 08:29:27.914938   10285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 08:29:27.951976   10285 cri.go:89] found id: ""
	I1129 08:29:27.952042   10285 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 08:29:27.964377   10285 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 08:29:27.976071   10285 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 08:29:27.987968   10285 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 08:29:27.987994   10285 kubeadm.go:158] found existing configuration files:
	
	I1129 08:29:27.988050   10285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 08:29:27.998644   10285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 08:29:27.998710   10285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 08:29:28.010182   10285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 08:29:28.020778   10285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 08:29:28.020842   10285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 08:29:28.032188   10285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 08:29:28.042721   10285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 08:29:28.042814   10285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 08:29:28.054180   10285 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 08:29:28.065140   10285 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 08:29:28.065210   10285 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1129 08:29:28.077047   10285 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1129 08:29:28.217329   10285 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1129 08:29:40.045287   10285 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 08:29:40.045350   10285 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 08:29:40.045412   10285 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 08:29:40.045501   10285 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 08:29:40.045575   10285 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 08:29:40.045656   10285 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 08:29:40.047325   10285 out.go:252]   - Generating certificates and keys ...
	I1129 08:29:40.047420   10285 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 08:29:40.047473   10285 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 08:29:40.047555   10285 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1129 08:29:40.047662   10285 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 08:29:40.047756   10285 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 08:29:40.047852   10285 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 08:29:40.047941   10285 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 08:29:40.048043   10285 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-213983 localhost] and IPs [192.168.39.35 127.0.0.1 ::1]
	I1129 08:29:40.048094   10285 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 08:29:40.048196   10285 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-213983 localhost] and IPs [192.168.39.35 127.0.0.1 ::1]
	I1129 08:29:40.048250   10285 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 08:29:40.048314   10285 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 08:29:40.048370   10285 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 08:29:40.048416   10285 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 08:29:40.048460   10285 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 08:29:40.048515   10285 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 08:29:40.048558   10285 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 08:29:40.048608   10285 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 08:29:40.048652   10285 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 08:29:40.048733   10285 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 08:29:40.048794   10285 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 08:29:40.050448   10285 out.go:252]   - Booting up control plane ...
	I1129 08:29:40.050548   10285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 08:29:40.050622   10285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 08:29:40.050684   10285 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 08:29:40.050801   10285 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 08:29:40.050915   10285 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 08:29:40.051012   10285 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 08:29:40.051119   10285 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 08:29:40.051179   10285 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 08:29:40.051299   10285 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 08:29:40.051395   10285 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1129 08:29:40.051448   10285 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.805596ms
	I1129 08:29:40.051529   10285 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1129 08:29:40.051599   10285 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.35:8443/livez
	I1129 08:29:40.051674   10285 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1129 08:29:40.051768   10285 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1129 08:29:40.051872   10285 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.666811506s
	I1129 08:29:40.051936   10285 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.948679312s
	I1129 08:29:40.051998   10285 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001971077s
	I1129 08:29:40.052087   10285 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1129 08:29:40.052191   10285 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1129 08:29:40.052243   10285 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1129 08:29:40.052400   10285 kubeadm.go:319] [mark-control-plane] Marking the node addons-213983 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1129 08:29:40.052470   10285 kubeadm.go:319] [bootstrap-token] Using token: 5610el.4dp9hzwtytji9smf
	I1129 08:29:40.053927   10285 out.go:252]   - Configuring RBAC rules ...
	I1129 08:29:40.054026   10285 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1129 08:29:40.054104   10285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1129 08:29:40.054224   10285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1129 08:29:40.054406   10285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1129 08:29:40.054557   10285 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1129 08:29:40.054634   10285 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1129 08:29:40.054726   10285 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1129 08:29:40.054775   10285 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1129 08:29:40.054823   10285 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1129 08:29:40.054846   10285 kubeadm.go:319] 
	I1129 08:29:40.054892   10285 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1129 08:29:40.054898   10285 kubeadm.go:319] 
	I1129 08:29:40.054971   10285 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1129 08:29:40.054997   10285 kubeadm.go:319] 
	I1129 08:29:40.055020   10285 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1129 08:29:40.055069   10285 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1129 08:29:40.055113   10285 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1129 08:29:40.055119   10285 kubeadm.go:319] 
	I1129 08:29:40.055161   10285 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1129 08:29:40.055166   10285 kubeadm.go:319] 
	I1129 08:29:40.055202   10285 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1129 08:29:40.055208   10285 kubeadm.go:319] 
	I1129 08:29:40.055253   10285 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1129 08:29:40.055312   10285 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1129 08:29:40.055368   10285 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1129 08:29:40.055371   10285 kubeadm.go:319] 
	I1129 08:29:40.055439   10285 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1129 08:29:40.055499   10285 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1129 08:29:40.055504   10285 kubeadm.go:319] 
	I1129 08:29:40.055573   10285 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5610el.4dp9hzwtytji9smf \
	I1129 08:29:40.055661   10285 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:62717defac07e0525174343341558099e36dc1f2fd9d8e8ecd10c36657166c94 \
	I1129 08:29:40.055679   10285 kubeadm.go:319] 	--control-plane 
	I1129 08:29:40.055683   10285 kubeadm.go:319] 
	I1129 08:29:40.055801   10285 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1129 08:29:40.055818   10285 kubeadm.go:319] 
	I1129 08:29:40.055899   10285 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5610el.4dp9hzwtytji9smf \
	I1129 08:29:40.056000   10285 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:62717defac07e0525174343341558099e36dc1f2fd9d8e8ecd10c36657166c94 
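The join commands embed a discovery hash over the cluster CA's public key. It can be recomputed with the standard recipe from the kubeadm docs, assuming minikube's cert dir from the log in place of the usual /etc/kubernetes/pki:

	# Hedged sketch: recompute the --discovery-token-ca-cert-hash shown in the join command above
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'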
	I1129 08:29:40.056010   10285 cni.go:84] Creating CNI manager for ""
	I1129 08:29:40.056017   10285 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1129 08:29:40.057461   10285 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1129 08:29:40.058563   10285 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1129 08:29:40.073779   10285 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
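The 496-byte file scp'd above is the bridge CNI config for the kvm2 + crio combination chosen earlier. Inspecting it on the guest is just (path from the log):

	# Hedged sketch: show the bridge CNI config minikube generated
	sudo cat /etc/cni/net.d/1-k8s.conflist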
	I1129 08:29:40.096063   10285 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 08:29:40.096148   10285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:40.096188   10285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-213983 minikube.k8s.io/updated_at=2025_11_29T08_29_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af minikube.k8s.io/name=addons-213983 minikube.k8s.io/primary=true
	I1129 08:29:40.271812   10285 ops.go:34] apiserver oom_adj: -16
	I1129 08:29:40.271941   10285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:40.772400   10285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:41.272313   10285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:41.772683   10285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:42.272350   10285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:42.772927   10285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:43.272564   10285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:43.772073   10285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:44.272605   10285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:44.773055   10285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:45.273012   10285 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1129 08:29:45.370535   10285 kubeadm.go:1114] duration metric: took 5.274436677s to wait for elevateKubeSystemPrivileges
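The burst of "kubectl get sa default" runs above is a poll: kubeadm creates the default ServiceAccount asynchronously, and minikube loops (at roughly 500ms intervals, matching the timestamps) until it exists before granting kube-system elevated RBAC. A minimal shell equivalent of that wait, using the same binary and kubeconfig paths as the log, would be:

	# poll until the default ServiceAccount exists
	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done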
	I1129 08:29:45.370574   10285 kubeadm.go:403] duration metric: took 17.45577816s to StartCluster
	I1129 08:29:45.370599   10285 settings.go:142] acquiring lock: {Name:mkb0bfd7d63d07772bc8411985c986880254a5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:29:45.370763   10285 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-5651/kubeconfig
	I1129 08:29:45.371208   10285 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/kubeconfig: {Name:mk06369260b11b7542906282ff812e026bce8478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 08:29:45.371441   10285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1129 08:29:45.371483   10285 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 08:29:45.371549   10285 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1129 08:29:45.371681   10285 addons.go:70] Setting yakd=true in profile "addons-213983"
	I1129 08:29:45.371706   10285 addons.go:239] Setting addon yakd=true in "addons-213983"
	I1129 08:29:45.371714   10285 config.go:182] Loaded profile config "addons-213983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:29:45.371736   10285 host.go:66] Checking if "addons-213983" exists ...
	I1129 08:29:45.371738   10285 addons.go:70] Setting ingress=true in profile "addons-213983"
	I1129 08:29:45.371754   10285 addons.go:239] Setting addon ingress=true in "addons-213983"
	I1129 08:29:45.371769   10285 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-213983"
	I1129 08:29:45.371711   10285 addons.go:70] Setting inspektor-gadget=true in profile "addons-213983"
	I1129 08:29:45.371787   10285 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-213983"
	I1129 08:29:45.371792   10285 addons.go:239] Setting addon inspektor-gadget=true in "addons-213983"
	I1129 08:29:45.371798   10285 host.go:66] Checking if "addons-213983" exists ...
	I1129 08:29:45.371795   10285 addons.go:70] Setting cloud-spanner=true in profile "addons-213983"
	I1129 08:29:45.371814   10285 addons.go:239] Setting addon cloud-spanner=true in "addons-213983"
	I1129 08:29:45.371816   10285 host.go:66] Checking if "addons-213983" exists ...
	I1129 08:29:45.371824   10285 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-213983"
	I1129 08:29:45.371849   10285 host.go:66] Checking if "addons-213983" exists ...
	I1129 08:29:45.371870   10285 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-213983"
	I1129 08:29:45.371886   10285 host.go:66] Checking if "addons-213983" exists ...
	I1129 08:29:45.372016   10285 addons.go:70] Setting registry-creds=true in profile "addons-213983"
	I1129 08:29:45.372043   10285 addons.go:239] Setting addon registry-creds=true in "addons-213983"
	I1129 08:29:45.372071   10285 host.go:66] Checking if "addons-213983" exists ...
	I1129 08:29:45.372086   10285 addons.go:70] Setting ingress-dns=true in profile "addons-213983"
	I1129 08:29:45.372113   10285 addons.go:239] Setting addon ingress-dns=true in "addons-213983"
	I1129 08:29:45.372195   10285 host.go:66] Checking if "addons-213983" exists ...
	I1129 08:29:45.372842   10285 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-213983"
	I1129 08:29:45.372865   10285 addons.go:70] Setting volcano=true in profile "addons-213983"
	I1129 08:29:45.372848   10285 addons.go:70] Setting storage-provisioner=true in profile "addons-213983"
	I1129 08:29:45.372881   10285 addons.go:239] Setting addon volcano=true in "addons-213983"
	I1129 08:29:45.372887   10285 addons.go:239] Setting addon storage-provisioner=true in "addons-213983"
	I1129 08:29:45.372906   10285 host.go:66] Checking if "addons-213983" exists ...
	I1129 08:29:45.372911   10285 host.go:66] Checking if "addons-213983" exists ...
	I1129 08:29:45.371817   10285 host.go:66] Checking if "addons-213983" exists ...
	I1129 08:29:45.371730   10285 addons.go:70] Setting gcp-auth=true in profile "addons-213983"
	I1129 08:29:45.373146   10285 addons.go:70] Setting metrics-server=true in profile "addons-213983"
	I1129 08:29:45.373161   10285 mustload.go:66] Loading cluster: addons-213983
	I1129 08:29:45.373163   10285 addons.go:239] Setting addon metrics-server=true in "addons-213983"
	I1129 08:29:45.373198   10285 host.go:66] Checking if "addons-213983" exists ...
	I1129 08:29:45.373322   10285 config.go:182] Loaded profile config "addons-213983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:29:45.373402   10285 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-213983"
	I1129 08:29:45.373418   10285 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-213983"
	I1129 08:29:45.373436   10285 host.go:66] Checking if "addons-213983" exists ...
	I1129 08:29:45.371747   10285 addons.go:70] Setting default-storageclass=true in profile "addons-213983"
	I1129 08:29:45.373884   10285 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-213983"
	I1129 08:29:45.373919   10285 addons.go:70] Setting volumesnapshots=true in profile "addons-213983"
	I1129 08:29:45.373947   10285 addons.go:239] Setting addon volumesnapshots=true in "addons-213983"
	I1129 08:29:45.373973   10285 host.go:66] Checking if "addons-213983" exists ...
	I1129 08:29:45.374111   10285 out.go:179] * Verifying Kubernetes components...
	I1129 08:29:45.374203   10285 addons.go:70] Setting registry=true in profile "addons-213983"
	I1129 08:29:45.374233   10285 addons.go:239] Setting addon registry=true in "addons-213983"
	I1129 08:29:45.374255   10285 host.go:66] Checking if "addons-213983" exists ...
	I1129 08:29:45.372868   10285 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-213983"
	I1129 08:29:45.375534   10285 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 08:29:45.380110   10285 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1129 08:29:45.380143   10285 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1129 08:29:45.380164   10285 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1129 08:29:45.380197   10285 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1129 08:29:45.380229   10285 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1129 08:29:45.380233   10285 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1129 08:29:45.380244   10285 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 08:29:45.380259   10285 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1129 08:29:45.380286   10285 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1129 08:29:45.380567   10285 host.go:66] Checking if "addons-213983" exists ...
	W1129 08:29:45.381308   10285 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
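The lone warning above is expected rather than a failure: the volcano addon is gated on the container runtime and is skipped under crio while the rest of the enable pass continues. Its resulting status can be confirmed afterwards with the stock addons command (profile name taken from the log):

	minikube -p addons-213983 addons list | grep volcano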
	I1129 08:29:45.382040   10285 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1129 08:29:45.382091   10285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1129 08:29:45.382870   10285 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1129 08:29:45.382899   10285 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1129 08:29:45.382911   10285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1129 08:29:45.383298   10285 addons.go:239] Setting addon default-storageclass=true in "addons-213983"
	I1129 08:29:45.383337   10285 host.go:66] Checking if "addons-213983" exists ...
	I1129 08:29:45.383302   10285 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-213983"
	I1129 08:29:45.383378   10285 host.go:66] Checking if "addons-213983" exists ...
	I1129 08:29:45.382965   10285 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1129 08:29:45.383607   10285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1129 08:29:45.383673   10285 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1129 08:29:45.383692   10285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1129 08:29:45.383012   10285 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1129 08:29:45.383757   10285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1129 08:29:45.383880   10285 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1129 08:29:45.383897   10285 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1129 08:29:45.383021   10285 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 08:29:45.383971   10285 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1129 08:29:45.383971   10285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 08:29:45.383039   10285 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1129 08:29:45.384406   10285 out.go:179]   - Using image docker.io/registry:3.0.0
	I1129 08:29:45.384413   10285 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1129 08:29:45.384415   10285 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1129 08:29:45.384426   10285 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1129 08:29:45.385440   10285 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1129 08:29:45.385455   10285 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1129 08:29:45.385497   10285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1129 08:29:45.385513   10285 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1129 08:29:45.385529   10285 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1129 08:29:45.386697   10285 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1129 08:29:45.386762   10285 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1129 08:29:45.388102   10285 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1129 08:29:45.388118   10285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1129 08:29:45.388279   10285 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1129 08:29:45.388527   10285 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 08:29:45.388542   10285 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 08:29:45.389410   10285 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1129 08:29:45.389541   10285 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1129 08:29:45.389558   10285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1129 08:29:45.391441   10285 out.go:179]   - Using image docker.io/busybox:stable
	I1129 08:29:45.391540   10285 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1129 08:29:45.391957   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.394015   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:45.394053   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.394084   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.394733   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	I1129 08:29:45.394755   10285 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1129 08:29:45.394744   10285 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1129 08:29:45.395022   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.395723   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.395907   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:45.395941   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.396088   10285 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1129 08:29:45.396100   10285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1129 08:29:45.396707   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	I1129 08:29:45.396737   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:45.396761   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.397020   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.397393   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.397703   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:45.397729   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.397814   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	I1129 08:29:45.397876   10285 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1129 08:29:45.398192   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.398521   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	I1129 08:29:45.398651   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.398958   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:45.399034   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:45.399073   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.399130   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.399371   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.399434   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:45.399463   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.399573   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	I1129 08:29:45.399716   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	I1129 08:29:45.399776   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.400024   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:45.400056   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.400130   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	I1129 08:29:45.400525   10285 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1129 08:29:45.400544   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	I1129 08:29:45.400906   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:45.400936   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.401151   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:45.401160   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.401178   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.401271   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	I1129 08:29:45.401497   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.401747   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	I1129 08:29:45.401931   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.401957   10285 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1129 08:29:45.401968   10285 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1129 08:29:45.402233   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:45.402347   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.402428   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:45.402460   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.402777   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	I1129 08:29:45.402788   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:45.402950   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.402968   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	I1129 08:29:45.403190   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	I1129 08:29:45.404026   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.404410   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:45.404441   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.404613   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	I1129 08:29:45.405370   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.405765   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:45.405786   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:45.405971   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	W1129 08:29:45.546821   10285 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40344->192.168.39.35:22: read: connection reset by peer
	I1129 08:29:45.546891   10285 retry.go:31] will retry after 348.449266ms: ssh: handshake failed: read tcp 192.168.39.1:40344->192.168.39.35:22: read: connection reset by peer
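The dial failure above is transient: the guest's sshd resets one of the fifteen parallel connections opened just above, and retry.go backs off (here ~348ms) and redials. A rough shell analogue of that retry-with-backoff, with the key path and user taken from the sshutil lines and illustrative delays, would be:

	for delay in 0.35 0.7 1.4; do
	  ssh -o StrictHostKeyChecking=no \
	      -i /home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa \
	      docker@192.168.39.35 true && break
	  sleep "$delay"
	done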
	I1129 08:29:46.228667   10285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 08:29:46.254916   10285 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 08:29:46.254962   10285 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1129 08:29:46.283373   10285 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1129 08:29:46.283403   10285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1129 08:29:46.314726   10285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1129 08:29:46.363308   10285 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1129 08:29:46.363328   10285 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1129 08:29:46.385740   10285 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1129 08:29:46.385768   10285 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1129 08:29:46.415304   10285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1129 08:29:46.416302   10285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 08:29:46.431443   10285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1129 08:29:46.434015   10285 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1129 08:29:46.434036   10285 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1129 08:29:46.435445   10285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1129 08:29:46.439535   10285 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1129 08:29:46.439554   10285 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1129 08:29:46.487328   10285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1129 08:29:46.507776   10285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1129 08:29:46.567257   10285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1129 08:29:46.752065   10285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1129 08:29:46.813613   10285 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1129 08:29:46.813640   10285 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1129 08:29:46.864691   10285 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1129 08:29:46.864713   10285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1129 08:29:46.891366   10285 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1129 08:29:46.891389   10285 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1129 08:29:46.912796   10285 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1129 08:29:46.912842   10285 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1129 08:29:46.960078   10285 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1129 08:29:46.960103   10285 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1129 08:29:47.434534   10285 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1129 08:29:47.434562   10285 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1129 08:29:47.443303   10285 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1129 08:29:47.443329   10285 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1129 08:29:47.479644   10285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1129 08:29:47.491535   10285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1129 08:29:47.528909   10285 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1129 08:29:47.528947   10285 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1129 08:29:47.583351   10285 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1129 08:29:47.583382   10285 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1129 08:29:47.842335   10285 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1129 08:29:47.842375   10285 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1129 08:29:47.892842   10285 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1129 08:29:47.892870   10285 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1129 08:29:47.953014   10285 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1129 08:29:47.953044   10285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1129 08:29:48.094891   10285 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1129 08:29:48.094918   10285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1129 08:29:48.192982   10285 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1129 08:29:48.193011   10285 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1129 08:29:48.259998   10285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.031294122s)
	I1129 08:29:48.260110   10285 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.005154939s)
	I1129 08:29:48.260797   10285 node_ready.go:35] waiting up to 6m0s for node "addons-213983" to be "Ready" ...
	I1129 08:29:48.271449   10285 node_ready.go:49] node "addons-213983" is "Ready"
	I1129 08:29:48.271480   10285 node_ready.go:38] duration metric: took 10.662932ms for node "addons-213983" to be "Ready" ...
	I1129 08:29:48.271496   10285 api_server.go:52] waiting for apiserver process to appear ...
	I1129 08:29:48.271574   10285 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 08:29:48.318202   10285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1129 08:29:48.604264   10285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1129 08:29:48.604989   10285 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1129 08:29:48.605012   10285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1129 08:29:49.005531   10285 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1129 08:29:49.005555   10285 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1129 08:29:49.283859   10285 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1129 08:29:49.283881   10285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1129 08:29:49.428330   10285 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.173334624s)
	I1129 08:29:49.428365   10285 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
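The sed pipeline that just completed rewrites the coredns ConfigMap in place so that host.minikube.internal resolves to the host-side bridge IP from inside the cluster (it also enables the `log` plugin). The hosts stanza spliced into the Corefile is exactly what the sed expressions above insert:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}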
	I1129 08:29:49.687053   10285 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1129 08:29:49.687077   10285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1129 08:29:49.973961   10285 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1129 08:29:49.973996   10285 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1129 08:29:49.980912   10285 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-213983" context rescaled to 1 replicas
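kubeadm deploys coredns with two replicas; on a single-node cluster minikube rescales it to one, as logged above. The equivalent imperative command would be:

	kubectl --context addons-213983 -n kube-system scale deployment coredns --replicas=1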
	I1129 08:29:50.246341   10285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1129 08:29:52.586200   10285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (6.271435705s)
	I1129 08:29:52.586333   10285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.169999603s)
	I1129 08:29:52.586365   10285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.171032482s)
	I1129 08:29:52.586434   10285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (6.154963794s)
	I1129 08:29:52.586485   10285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.150994121s)
	I1129 08:29:52.586559   10285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.099196757s)
	I1129 08:29:52.586588   10285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.078782264s)
	I1129 08:29:52.817799   10285 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1129 08:29:52.820764   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:52.821233   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:52.821263   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:52.821433   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	I1129 08:29:53.114911   10285 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1129 08:29:53.341556   10285 addons.go:239] Setting addon gcp-auth=true in "addons-213983"
	I1129 08:29:53.341622   10285 host.go:66] Checking if "addons-213983" exists ...
	I1129 08:29:53.343755   10285 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1129 08:29:53.346554   10285 main.go:143] libmachine: domain addons-213983 has defined MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:53.347066   10285 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:67:4e", ip: ""} in network mk-addons-213983: {Iface:virbr1 ExpiryTime:2025-11-29 09:29:20 +0000 UTC Type:0 Mac:52:54:00:70:67:4e Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-213983 Clientid:01:52:54:00:70:67:4e}
	I1129 08:29:53.347101   10285 main.go:143] libmachine: domain addons-213983 has defined IP address 192.168.39.35 and MAC address 52:54:00:70:67:4e in network mk-addons-213983
	I1129 08:29:53.347311   10285 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/addons-213983/id_rsa Username:docker}
	I1129 08:29:54.359431   10285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.792129002s)
	I1129 08:29:54.359479   10285 addons.go:495] Verifying addon ingress=true in "addons-213983"
	I1129 08:29:54.359556   10285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.607458153s)
	I1129 08:29:54.359631   10285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.879950504s)
	I1129 08:29:54.359656   10285 addons.go:495] Verifying addon registry=true in "addons-213983"
	I1129 08:29:54.359753   10285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.86817942s)
	I1129 08:29:54.359774   10285 addons.go:495] Verifying addon metrics-server=true in "addons-213983"
	I1129 08:29:54.359809   10285 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.088213725s)
	I1129 08:29:54.359849   10285 api_server.go:72] duration metric: took 8.988338076s to wait for apiserver process to appear ...
	I1129 08:29:54.359861   10285 api_server.go:88] waiting for apiserver healthz status ...
	I1129 08:29:54.359884   10285 api_server.go:253] Checking apiserver healthz at https://192.168.39.35:8443/healthz ...
	I1129 08:29:54.359910   10285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.041668903s)
	I1129 08:29:54.361416   10285 out.go:179] * Verifying ingress addon...
	I1129 08:29:54.362122   10285 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-213983 service yakd-dashboard -n yakd-dashboard
	
	I1129 08:29:54.362134   10285 out.go:179] * Verifying registry addon...
	I1129 08:29:54.363813   10285 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1129 08:29:54.364385   10285 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1129 08:29:54.389871   10285 api_server.go:279] https://192.168.39.35:8443/healthz returned 200:
	ok
	I1129 08:29:54.395912   10285 api_server.go:141] control plane version: v1.34.1
	I1129 08:29:54.395940   10285 api_server.go:131] duration metric: took 36.071428ms to wait for apiserver health ...
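The /healthz and /version endpoints polled here are exposed to unauthenticated clients by the default system:public-info-viewer binding, so the same probe works directly from the host:

	# expect the literal body "ok"
	curl -k https://192.168.39.35:8443/healthz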
	I1129 08:29:54.395949   10285 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 08:29:54.406958   10285 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1129 08:29:54.406986   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:54.407260   10285 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1129 08:29:54.407283   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:54.442597   10285 system_pods.go:59] 15 kube-system pods found
	I1129 08:29:54.442632   10285 system_pods.go:61] "amd-gpu-device-plugin-q6ggk" [e721dc0e-2e23-4bcd-a8fa-d84566326095] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1129 08:29:54.442640   10285 system_pods.go:61] "coredns-66bc5c9577-dzxvz" [ee201c6f-253b-4bf1-8a2a-356bf9f63f0a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 08:29:54.442648   10285 system_pods.go:61] "coredns-66bc5c9577-rd24h" [19a814d4-a17e-46b0-b4f6-a28f17377608] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 08:29:54.442652   10285 system_pods.go:61] "etcd-addons-213983" [5a688164-819a-45a6-9b96-152d7ee40517] Running
	I1129 08:29:54.442656   10285 system_pods.go:61] "kube-apiserver-addons-213983" [f8439e5b-c31b-4246-96cb-aa9d7f97d7ef] Running
	I1129 08:29:54.442659   10285 system_pods.go:61] "kube-controller-manager-addons-213983" [339e4d72-2288-4096-ad89-745879383d51] Running
	I1129 08:29:54.442665   10285 system_pods.go:61] "kube-ingress-dns-minikube" [0a82c192-8ed7-43b0-a6df-5452ef3d0494] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 08:29:54.442668   10285 system_pods.go:61] "kube-proxy-m7v4z" [73957cad-6d0d-405f-aeff-777f57eb12f5] Running
	I1129 08:29:54.442671   10285 system_pods.go:61] "kube-scheduler-addons-213983" [6bfea183-1d9b-4cc0-bad3-2c7e062680e6] Running
	I1129 08:29:54.442675   10285 system_pods.go:61] "metrics-server-85b7d694d7-frgcn" [f71b082d-7406-491a-9d31-dd48f8c0106e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 08:29:54.442681   10285 system_pods.go:61] "nvidia-device-plugin-daemonset-c9l66" [5e8b5d05-ea15-45b9-8a44-7d40d4d34c68] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1129 08:29:54.442687   10285 system_pods.go:61] "registry-6b586f9694-pw672" [464566ae-151b-4294-8a2a-b34e5c6562ec] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1129 08:29:54.442692   10285 system_pods.go:61] "registry-creds-764b6fb674-k52zl" [8ca120a2-be4b-423e-ab6b-09f9336b7bb6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 08:29:54.442696   10285 system_pods.go:61] "registry-proxy-7cbkh" [73ffdfdf-bd95-4081-8154-0ffcb209c237] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1129 08:29:54.442702   10285 system_pods.go:61] "storage-provisioner" [ac55ea6b-02ab-4229-baef-8d112e0991a8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 08:29:54.442709   10285 system_pods.go:74] duration metric: took 46.75424ms to wait for pod list to return data ...
	I1129 08:29:54.442716   10285 default_sa.go:34] waiting for default service account to be created ...
	I1129 08:29:54.464656   10285 default_sa.go:45] found service account: "default"
	I1129 08:29:54.464677   10285 default_sa.go:55] duration metric: took 21.955695ms for default service account to be created ...
	I1129 08:29:54.464687   10285 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 08:29:54.505803   10285 system_pods.go:86] 15 kube-system pods found
	I1129 08:29:54.505865   10285 system_pods.go:89] "amd-gpu-device-plugin-q6ggk" [e721dc0e-2e23-4bcd-a8fa-d84566326095] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1129 08:29:54.505873   10285 system_pods.go:89] "coredns-66bc5c9577-dzxvz" [ee201c6f-253b-4bf1-8a2a-356bf9f63f0a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 08:29:54.505881   10285 system_pods.go:89] "coredns-66bc5c9577-rd24h" [19a814d4-a17e-46b0-b4f6-a28f17377608] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 08:29:54.505886   10285 system_pods.go:89] "etcd-addons-213983" [5a688164-819a-45a6-9b96-152d7ee40517] Running
	I1129 08:29:54.505890   10285 system_pods.go:89] "kube-apiserver-addons-213983" [f8439e5b-c31b-4246-96cb-aa9d7f97d7ef] Running
	I1129 08:29:54.505894   10285 system_pods.go:89] "kube-controller-manager-addons-213983" [339e4d72-2288-4096-ad89-745879383d51] Running
	I1129 08:29:54.505900   10285 system_pods.go:89] "kube-ingress-dns-minikube" [0a82c192-8ed7-43b0-a6df-5452ef3d0494] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1129 08:29:54.505904   10285 system_pods.go:89] "kube-proxy-m7v4z" [73957cad-6d0d-405f-aeff-777f57eb12f5] Running
	I1129 08:29:54.505909   10285 system_pods.go:89] "kube-scheduler-addons-213983" [6bfea183-1d9b-4cc0-bad3-2c7e062680e6] Running
	I1129 08:29:54.505914   10285 system_pods.go:89] "metrics-server-85b7d694d7-frgcn" [f71b082d-7406-491a-9d31-dd48f8c0106e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1129 08:29:54.505921   10285 system_pods.go:89] "nvidia-device-plugin-daemonset-c9l66" [5e8b5d05-ea15-45b9-8a44-7d40d4d34c68] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1129 08:29:54.505928   10285 system_pods.go:89] "registry-6b586f9694-pw672" [464566ae-151b-4294-8a2a-b34e5c6562ec] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1129 08:29:54.505933   10285 system_pods.go:89] "registry-creds-764b6fb674-k52zl" [8ca120a2-be4b-423e-ab6b-09f9336b7bb6] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1129 08:29:54.505939   10285 system_pods.go:89] "registry-proxy-7cbkh" [73ffdfdf-bd95-4081-8154-0ffcb209c237] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1129 08:29:54.505944   10285 system_pods.go:89] "storage-provisioner" [ac55ea6b-02ab-4229-baef-8d112e0991a8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1129 08:29:54.505950   10285 system_pods.go:126] duration metric: took 41.258922ms to wait for k8s-apps to be running ...
	I1129 08:29:54.505969   10285 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 08:29:54.506017   10285 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 08:29:54.552486   10285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.948168474s)
	W1129 08:29:54.552533   10285 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1129 08:29:54.552554   10285 retry.go:31] will retry after 247.654712ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
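The failure above is an ordering race, not a bad manifest: the batch apply creates the VolumeSnapshot CRDs and a VolumeSnapshotClass that depends on them in one shot, and the REST mapping for the VolumeSnapshotClass is resolved before the freshly created CRDs are established, hence "no matches for kind". retry.go backs off and re-applies, and the harness eventually falls back to `kubectl apply --force` (next line). A minimal sketch of side-stepping the race instead: apply the CRDs first, wait for their Established condition, then apply the dependent object (file paths are the ones from the log; the timeout is an illustrative assumption):

	# Phase 1: CRDs only.
	kubectl apply \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	# Phase 2: wait until the CRD is served before creating instances of it.
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	# Phase 3: the object that needs the CRD.
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml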
	I1129 08:29:54.800965   10285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1129 08:29:54.873527   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:54.873566   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:55.235582   10285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.989183391s)
	I1129 08:29:55.235622   10285 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-213983"
	I1129 08:29:55.235648   10285 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.891866959s)
	I1129 08:29:55.235696   10285 system_svc.go:56] duration metric: took 729.720259ms WaitForService to wait for kubelet
	I1129 08:29:55.235729   10285 kubeadm.go:587] duration metric: took 9.864212068s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 08:29:55.235753   10285 node_conditions.go:102] verifying NodePressure condition ...
	I1129 08:29:55.237106   10285 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1129 08:29:55.237117   10285 out.go:179] * Verifying csi-hostpath-driver addon...
	I1129 08:29:55.238331   10285 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1129 08:29:55.238952   10285 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1129 08:29:55.239512   10285 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1129 08:29:55.239526   10285 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1129 08:29:55.268614   10285 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1129 08:29:55.268648   10285 node_conditions.go:123] node cpu capacity is 2
	I1129 08:29:55.268668   10285 node_conditions.go:105] duration metric: took 32.909135ms to run NodePressure ...
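NodePressure verification reads the node's capacity and pressure conditions; the two capacity figures above (17734596Ki ephemeral storage, 2 CPUs) come straight from the node status. A minimal sketch of pulling the same fields by hand (the node name is assumed to match the profile name in this report; the jsonpath is illustrative):

	# Capacity as recorded in .status.capacity.
	kubectl --context addons-213983 get node addons-213983 \
	  -o jsonpath='cpu={.status.capacity.cpu} ephemeral-storage={.status.capacity.ephemeral-storage}{"\n"}'
	# Pressure conditions (MemoryPressure, DiskPressure, PIDPressure, Ready).
	kubectl --context addons-213983 get node addons-213983 \
	  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'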
	I1129 08:29:55.268684   10285 start.go:242] waiting for startup goroutines ...
	I1129 08:29:55.277133   10285 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1129 08:29:55.277153   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:55.353662   10285 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1129 08:29:55.353692   10285 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1129 08:29:55.376555   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:55.384340   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:55.448459   10285 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1129 08:29:55.448486   10285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1129 08:29:55.573848   10285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1129 08:29:55.748955   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:55.868901   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:55.869302   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:56.245410   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:56.374968   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:56.378889   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:56.542855   10285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.74182016s)
	I1129 08:29:56.787443   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:56.933034   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:56.933223   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:56.956624   10285 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.382725483s)
	I1129 08:29:56.958035   10285 addons.go:495] Verifying addon gcp-auth=true in "addons-213983"
	I1129 08:29:56.960407   10285 out.go:179] * Verifying gcp-auth addon...
	I1129 08:29:56.962861   10285 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1129 08:29:57.021421   10285 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1129 08:29:57.021442   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
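From here on the log is dominated by kapi.go:96 lines: one poll loop per label selector (csi-hostpath-driver and registry in kube-system, ingress-nginx, and gcp-auth in its own namespace), each re-listing its pods every few hundred milliseconds until they leave Pending. A minimal sketch of the same waits expressed declaratively with kubectl instead of the harness's loop (namespaces are the ones named in this report; the timeout is an illustrative assumption):

	kubectl --context addons-213983 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver --for=condition=Ready --timeout=300s
	kubectl --context addons-213983 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=300s
	kubectl --context addons-213983 -n ingress-nginx wait pod \
	  -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=300s
	kubectl --context addons-213983 -n gcp-auth wait pod \
	  -l kubernetes.io/minikube-addons=gcp-auth --for=condition=Ready --timeout=300s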
	I1129 08:29:57.247152   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:57.375229   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:57.375534   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:57.473417   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:57.745624   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:57.874011   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:57.874157   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:57.975047   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:58.243606   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:58.367918   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:58.368000   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:58.471157   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:58.743273   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:58.867803   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:58.867914   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:58.966463   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:59.243768   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:59.368298   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:59.368452   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:59.466849   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:29:59.743680   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:29:59.868147   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:29:59.868924   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:29:59.966363   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:00.243613   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:00.374313   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:00.374701   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:00.470011   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:00.746337   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:00.868634   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:00.870398   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:00.967413   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:01.244258   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:01.368998   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:01.369320   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:01.468051   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:01.746796   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:01.869160   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:01.870491   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:01.966265   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:02.243934   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:02.369269   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:02.371107   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:02.467442   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:02.743600   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:02.868605   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:02.870572   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:02.967150   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:03.243722   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:03.368339   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:03.368464   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:03.467760   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:03.744733   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:03.868194   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:03.868196   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:03.966008   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:04.243290   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:04.369559   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:04.369962   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:04.470087   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:04.742824   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:04.868177   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:04.868709   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:04.966062   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:05.243907   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:05.367872   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:05.368137   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:05.466559   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:05.744521   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:05.867734   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:05.867890   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:05.966846   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:06.243048   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:06.368458   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:06.368877   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:06.467664   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:06.745480   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:06.868560   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:06.870055   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:06.967457   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:07.243694   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:07.369420   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:07.371127   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:07.466486   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:07.746600   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:07.867793   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:07.868384   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:07.966886   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:08.243881   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:08.368041   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:08.369631   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:08.467193   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:08.746895   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:08.868128   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:08.868664   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:08.966909   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:09.245260   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:09.368281   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:09.369515   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:09.466583   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:09.746382   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:09.868170   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:09.868918   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:09.967112   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:10.244740   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:10.374740   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:10.374883   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:10.472570   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:10.744452   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:10.869226   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:10.869879   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:10.970263   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:11.244375   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:11.368495   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:11.368604   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:11.467043   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:11.745317   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:11.869283   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:11.870318   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:11.966709   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:12.247739   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:12.372372   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:12.374391   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:12.467243   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:12.812030   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:12.874805   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:12.875140   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:12.968330   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:13.265435   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:13.375282   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:13.375717   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:13.482007   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:13.816908   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:13.869720   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:13.870274   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:13.968141   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:14.246005   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:14.372791   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:14.391844   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:14.467235   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:14.743524   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:15.153184   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:15.154605   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:15.154964   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:15.254293   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:15.369867   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:15.371580   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:15.467436   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:15.746362   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:15.873292   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:15.873298   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:15.968063   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:16.244255   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:16.371610   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:16.372664   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:16.467200   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:16.745877   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:16.869717   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:16.870818   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:16.970041   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:17.486519   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:17.489290   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:17.489605   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:17.490213   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:17.744784   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:17.867737   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:17.868060   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:17.966881   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:18.243704   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:18.369772   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:18.370007   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:18.466728   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:18.745758   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:18.869612   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:18.870396   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:18.966933   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:19.243027   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:19.367355   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:19.369289   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:19.466674   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:19.746714   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:19.868970   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:19.869114   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:19.969021   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:20.244796   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:20.368457   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:20.369633   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:20.471325   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:20.746475   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:20.872223   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:20.875263   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:20.968507   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:21.243119   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:21.367781   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:21.368214   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:21.468239   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:21.744152   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:21.867349   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:21.870202   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1129 08:30:21.967548   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:22.243764   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:22.372729   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:22.373978   10285 kapi.go:107] duration metric: took 28.00959182s to wait for kubernetes.io/minikube-addons=registry ...
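The duration metric above closes the registry loop after ~28s of polling; the other selectors keep cycling below. When a selector sits at Pending like this, the pod events usually say why (image pulls, scheduling, webhook certificate waits). A minimal sketch of inspecting one of the still-pending selectors (standard kubectl commands; the tail lengths are arbitrary):

	kubectl --context addons-213983 -n kube-system get pods \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver
	kubectl --context addons-213983 -n kube-system describe pods \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver | tail -n 20
	kubectl --context addons-213983 -n kube-system get events \
	  --sort-by=.lastTimestamp | tail -n 20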
	I1129 08:30:22.468957   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:22.742347   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:22.870402   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:22.970277   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:23.255711   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:23.369389   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:23.466210   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:23.745156   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:23.870241   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:23.969427   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:24.245072   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:24.368141   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:24.467099   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:24.742706   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:24.869790   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:24.966164   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:25.243171   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:25.368238   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:25.467811   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:25.742925   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:25.868557   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:25.966990   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:26.243998   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:26.368864   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:26.468536   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:26.743765   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:26.870111   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:26.968077   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:27.243568   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:27.368416   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:27.469036   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:27.747548   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:27.870517   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:27.969133   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:28.244180   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:28.371290   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:28.467172   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:28.745442   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:28.868156   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:28.966099   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:29.242767   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:29.368041   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:29.466166   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:29.860750   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:29.867992   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:29.967564   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:30.243912   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:30.368212   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:30.466925   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:30.743104   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:30.867224   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:30.966482   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:31.243633   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:31.368004   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:31.466160   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:31.743054   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:31.867120   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:31.966596   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:32.244005   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:32.367225   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:32.467012   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:32.744629   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:32.870001   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:32.967131   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:33.242869   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:33.369206   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:33.467295   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:33.946939   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:33.951376   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:33.968407   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:34.246412   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:34.371105   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:34.466867   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:34.744899   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:34.868908   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:34.969628   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:35.246414   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:35.368377   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:35.468383   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:35.744597   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:35.868656   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:35.968602   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:36.246897   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:36.368971   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:36.466117   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:36.742766   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:36.868262   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:36.967208   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:37.245941   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:37.367430   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:37.466162   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:37.742625   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:37.869232   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:37.968197   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:38.246418   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:38.368430   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:38.469287   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:38.746066   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:38.867497   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:38.969225   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:39.243461   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:39.368983   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:39.467498   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:39.744137   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:39.867155   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:39.967153   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:40.245372   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:40.368255   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:40.467211   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:40.743951   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:40.869583   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:40.966846   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:41.243233   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:41.367798   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:41.466301   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:41.744366   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:41.868145   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:41.967350   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:42.245557   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:42.369165   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:42.469151   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:42.743542   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:42.868406   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:42.966221   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:43.243456   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:43.370954   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:43.467241   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:43.743738   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:43.867979   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:43.967301   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:44.251998   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:44.367787   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:44.471375   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:44.743147   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:44.869963   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:44.972784   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:45.246292   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:45.371141   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:45.475282   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:45.743380   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:45.868248   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:45.967206   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:46.246007   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:46.367448   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:46.466684   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:46.745453   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:46.868614   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:46.966971   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:47.243521   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:47.367892   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:47.468358   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:47.745794   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:47.869783   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:47.970701   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:48.249982   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:48.370591   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:48.470001   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:48.745210   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:48.869905   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:48.970472   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:49.243430   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:49.368183   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:49.467665   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:49.743760   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:49.870458   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:49.966534   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:50.407161   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:50.407406   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:50.466690   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:50.743952   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:50.868128   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:50.969530   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:51.243029   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:51.367482   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:51.468312   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:51.745347   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:51.868399   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:51.969912   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:52.244119   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:52.368109   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:52.472893   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:52.743890   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:52.868408   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:52.967175   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:53.246602   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:53.367908   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:53.470856   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:53.753927   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:53.868583   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:53.968072   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:54.242905   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:54.370132   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:54.470775   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:54.748476   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:54.873218   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:54.968899   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:55.247175   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:55.367981   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:55.466258   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:55.743554   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:55.871499   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:55.967503   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:56.526581   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:56.530278   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:56.530735   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:56.744404   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:56.868585   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:56.967642   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:57.247002   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:57.368693   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:57.466758   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:57.743413   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:57.868167   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:57.967236   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:58.245331   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:58.369981   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:58.469722   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:58.744164   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:58.867652   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:58.966807   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:59.244294   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:59.369682   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:59.467115   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:30:59.745762   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:30:59.868907   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:30:59.966431   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:00.243602   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:31:00.368427   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:31:00.466598   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:00.751617   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:31:00.872007   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:31:00.967665   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:01.248678   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1129 08:31:01.369314   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:31:01.467684   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:01.748983   10285 kapi.go:107] duration metric: took 1m6.510029677s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1129 08:31:01.871216   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:31:01.970245   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:02.373972   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:31:02.481573   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:02.871317   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:31:02.970890   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:03.372349   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:31:03.468873   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:03.871022   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:31:03.969099   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:04.370803   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:31:04.468022   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:04.882504   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:31:04.967941   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:05.368993   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:31:05.466384   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:05.877463   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:31:05.969479   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:06.368772   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:31:06.467723   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:06.867391   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:31:06.969425   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:07.371906   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:31:07.469930   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:07.870390   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:31:07.966389   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:08.369041   10285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1129 08:31:08.466000   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:08.870080   10285 kapi.go:107] duration metric: took 1m14.506261483s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1129 08:31:08.969428   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:09.468870   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:09.966805   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:10.467613   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:10.968453   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:11.469668   10285 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1129 08:31:11.973185   10285 kapi.go:107] duration metric: took 1m15.010322568s to wait for kubernetes.io/minikube-addons=gcp-auth ...
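
The kapi.go:96 lines above are minikube polling each addon's label selector until its pods leave Pending. The same selectors can be spot-checked from the CLI (a rough sketch, not the test's own code path; --all-namespaces is used because the addons span several namespaces):

    kubectl --context addons-213983 get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx
    kubectl --context addons-213983 get pods --all-namespaces -l kubernetes.io/minikube-addons=gcp-auth
    kubectl --context addons-213983 get pods --all-namespaces -l kubernetes.io/minikube-addons=csi-hostpath-driver
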
	I1129 08:31:11.974893   10285 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-213983 cluster.
	I1129 08:31:11.976089   10285 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1129 08:31:11.977315   10285 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1129 08:31:11.978942   10285 out.go:179] * Enabled addons: default-storageclass, inspektor-gadget, storage-provisioner, amd-gpu-device-plugin, registry-creds, ingress-dns, nvidia-device-plugin, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1129 08:31:11.980060   10285 addons.go:530] duration metric: took 1m26.608519495s for enable addons: enabled=[default-storageclass inspektor-gadget storage-provisioner amd-gpu-device-plugin registry-creds ingress-dns nvidia-device-plugin cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
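
Regarding the gcp-auth hint above: the opt-out is a pod label whose key is gcp-auth-skip-secret. A minimal sketch of launching a pod the credential-injecting webhook should skip; the pod name, image, and the label value "true" are illustrative assumptions (the message above only specifies the key):

    # hypothetical pod that opts out of GCP credential injection
    kubectl --context addons-213983 run no-gcp-creds --image=busybox \
      --labels=gcp-auth-skip-secret=true --restart=Never -- sleep 3600
    # check whether anything gcp-related was injected into the pod spec
    kubectl --context addons-213983 get pod no-gcp-creds -o yaml | grep -i gcp
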
	I1129 08:31:11.980100   10285 start.go:247] waiting for cluster config update ...
	I1129 08:31:11.980122   10285 start.go:256] writing updated cluster config ...
	I1129 08:31:11.980415   10285 ssh_runner.go:195] Run: rm -f paused
	I1129 08:31:11.991582   10285 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 08:31:11.998848   10285 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rd24h" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:31:12.004557   10285 pod_ready.go:94] pod "coredns-66bc5c9577-rd24h" is "Ready"
	I1129 08:31:12.004581   10285 pod_ready.go:86] duration metric: took 5.699719ms for pod "coredns-66bc5c9577-rd24h" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:31:12.014142   10285 pod_ready.go:83] waiting for pod "etcd-addons-213983" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:31:12.020218   10285 pod_ready.go:94] pod "etcd-addons-213983" is "Ready"
	I1129 08:31:12.020237   10285 pod_ready.go:86] duration metric: took 6.067111ms for pod "etcd-addons-213983" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:31:12.022385   10285 pod_ready.go:83] waiting for pod "kube-apiserver-addons-213983" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:31:12.032158   10285 pod_ready.go:94] pod "kube-apiserver-addons-213983" is "Ready"
	I1129 08:31:12.032178   10285 pod_ready.go:86] duration metric: took 9.775269ms for pod "kube-apiserver-addons-213983" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:31:12.037346   10285 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-213983" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:31:12.395809   10285 pod_ready.go:94] pod "kube-controller-manager-addons-213983" is "Ready"
	I1129 08:31:12.395853   10285 pod_ready.go:86] duration metric: took 358.479782ms for pod "kube-controller-manager-addons-213983" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:31:12.597695   10285 pod_ready.go:83] waiting for pod "kube-proxy-m7v4z" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:31:13.139559   10285 pod_ready.go:94] pod "kube-proxy-m7v4z" is "Ready"
	I1129 08:31:13.139586   10285 pod_ready.go:86] duration metric: took 541.868652ms for pod "kube-proxy-m7v4z" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:31:13.196707   10285 pod_ready.go:83] waiting for pod "kube-scheduler-addons-213983" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:31:13.596319   10285 pod_ready.go:94] pod "kube-scheduler-addons-213983" is "Ready"
	I1129 08:31:13.596344   10285 pod_ready.go:86] duration metric: took 399.614361ms for pod "kube-scheduler-addons-213983" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 08:31:13.596356   10285 pod_ready.go:40] duration metric: took 1.604739636s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 08:31:13.643938   10285 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 08:31:13.645916   10285 out.go:179] * Done! kubectl is now configured to use "addons-213983" cluster and "default" namespace by default
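
The pod_ready.go pass above re-checks each control-plane label selector in kube-system within a 4m budget. A rough shell equivalent using the same selectors (an illustration of the logged behavior, not minikube's implementation):

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl --context addons-213983 -n kube-system wait pod -l "$sel" \
        --for=condition=Ready --timeout=4m
    done
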
	
	
	==> CRI-O <==
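
The entries below are CRI-O debug logs of the CRI RPCs issued while this report was collected: Version, ImageFsInfo, and ListContainers. The same data can be pulled by hand over the node's CRI-O socket with crictl, for example (assuming the default socket path inside the VM):

    out/minikube-linux-amd64 -p addons-213983 ssh "sudo crictl version"
    out/minikube-linux-amd64 -p addons-213983 ssh "sudo crictl imagefsinfo"
    out/minikube-linux-amd64 -p addons-213983 ssh "sudo crictl ps -a"
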
	Nov 29 08:34:18 addons-213983 crio[804]: time="2025-11-29 08:34:18.796610427Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764405258796584947,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=946f6416-b9af-415d-811d-2274eed8d087 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 08:34:18 addons-213983 crio[804]: time="2025-11-29 08:34:18.797611128Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6abf8d5d-0847-4a14-9a88-6cfe717eace9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 08:34:18 addons-213983 crio[804]: time="2025-11-29 08:34:18.797681872Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6abf8d5d-0847-4a14-9a88-6cfe717eace9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 08:34:18 addons-213983 crio[804]: time="2025-11-29 08:34:18.798348715Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7bc4e18ca6c5dab2ce801212fec28a78128d0882fc8307828ed880b895938e08,PodSandboxId:432e7224b78c7b161a2be21a8b7bbab53497f66a5c79a01307b64a8644baea02,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764405115014563058,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 91d59907-f6f5-4a62-a39f-f6c5de4fe9d9,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841d5b89da650e1dbbe4dd4058833146baa40596f8195a53b5969ad91fcba67,PodSandboxId:612e270dc018797354806ca86d9d3d1851f04cbc6d16fb460cce8be7695ce87e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764405077839493236,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a81c966d-80ea-4cb4-af63-4079ae7f315c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ce39e48bcd4efdf93ec6ae9ac6259afc5314a6d30b1d575004c9c4284613d4,PodSandboxId:987423905f107bcc2c8bff06004c5a7712d918aea305f8a9902896c96518bb30,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764405067998832160,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-trgwz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c1901104-dbb9-4f94-985c-be9b2573659a,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5123cf8c492674047bece66e8d25e70c9415215170b4dcf664f9408c732f52e2,PodSandboxId:0206aa6e16097ba5832f879222212ee90ee3aafa8a3bd28ee8f00b627c511bfa,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764405045234351492,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-sdt5n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9b4a60cf-801d-4bbb-bce5-4466ffc64084,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b6a0e2d1080d9593b218ad14800cb8f9f8c334ea7b62daacbab3f926754b9b,PodSandboxId:d110170d76f8f92cadc70a3f8565813b367ad6e44a6e6b902a663b158a6597d0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764405042263362754,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6pwjs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bfcdd30b-5977-4429-9edd-e51256b78e35,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0809c238946e8de86876f44850a8ec9bbd17666d75ee004e69005d73d5230b1,PodSandboxId:f44261ca5ed9ae0d00419b44f4d75498926fbc0dc8ff5a875d581dbcac1b9567,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764405017640561952,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a82c192-8ed7-43b0-a6df-5452ef3d0494,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26f17f5295293b31f77d80c14086bfa9c2cd414d94768bebc20fcb12c30cf114,PodSandboxId:d3b04238dbe870496aef45e077b76f8730046b39c8709e05b29cc87fe1df64ff,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764404996781287924,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-q6ggk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e721dc0e-2e23-4bcd-a8fa-d84566326095,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6f75a5c4aaa9fb8a464cfd62bd03fb6a2bffe85a33db52494aea5653c8306e,PodSandboxId:e776fdb06892740e49820c913d8520a06ee377ecdc696462040565c7ede0f898,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764404993918698532,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac55ea6b-02ab-4229-baef-8d112e0991a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b431d3c97999ea2a9b7bcc93124e7b9011bf4248880ec64639c3595b10390f3c,PodSandboxId:edd893bbc7243cd3a9e2d8f2f8cd9b3ec3a6d4dccd122c350b28de981ab634d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764404987035209701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rd24h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a814d4-a17e-46b0-b4f6-a28f17377608,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfcdece95bcaa0a43e1d67b2a9f46fb03b0d7344e00c67921261a8f9052f6ca9,PodSandboxId:4ce7296f1ca94ebf516ae25481f851d3be0bec2c79de195393744abc542e7369,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764404986391391450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7v4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73957cad-6d0d-405f-aeff-777f57eb12f5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:387233778fb07ffe3611bd0f9ac9f84ce4f6c243360c25147bafd7c9011e5c78,PodSandboxId:b3323ed97a78104d4e17073ca9fecbc5e67a5719299c6a9b791618bfeee2ce9f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1764404973365391652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-213983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4225dc54e60f7eb12ae066d1c73bd50,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9de99a70df7744d8953823ac7c5cc23f2b0db870d180d548cd716b797710b0a,PodSandboxId:d7349af861acf32c2eef9fe6c7af0c1d6e6048724755a7b6f0ca088b82652f9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764404973326310533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-213983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 969049aec7e7d88eaade5e5b8fc06e4c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78844f6484972079bbc7688afa8900169cd9fc313a923e73b9eb0371544c633b,PodSandboxId:5a3de68070b028fb75d22617142de754b182ac9950f24398fc701e2599997be4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764404973320578218,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-213983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f030ef3593f7c8901d8e89c1ce3a065,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b1fbfd71ea3bb00ff935b93d74a76eb9ac33487269e862f6979303b331a0b34,PodSandboxId:a662464c4d4fe01cc392820e335975d6dde952b886fb6f0b38bff913f2b73723,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764404973281840812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-213983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41eaacfe946489b0e0104837a8d6a277,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6abf8d5d-0847-4a14-9a88-6cfe717eace9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 08:34:18 addons-213983 crio[804]: time="2025-11-29 08:34:18.841635793Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da31bc20-0f65-4c55-afca-7efc79403f27 name=/runtime.v1.RuntimeService/Version
	Nov 29 08:34:18 addons-213983 crio[804]: time="2025-11-29 08:34:18.841725936Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da31bc20-0f65-4c55-afca-7efc79403f27 name=/runtime.v1.RuntimeService/Version
	Nov 29 08:34:18 addons-213983 crio[804]: time="2025-11-29 08:34:18.843359807Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86c5eee7-9d0c-42c0-863b-c342be79b4bc name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 08:34:18 addons-213983 crio[804]: time="2025-11-29 08:34:18.844566511Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764405258844539618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588567,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86c5eee7-9d0c-42c0-863b-c342be79b4bc name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 08:34:18 addons-213983 crio[804]: time="2025-11-29 08:34:18.845505780Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2c7333c-f04f-488b-ae1d-8833c0a5ca14 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 08:34:18 addons-213983 crio[804]: time="2025-11-29 08:34:18.845579617Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2c7333c-f04f-488b-ae1d-8833c0a5ca14 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 08:34:18 addons-213983 crio[804]: time="2025-11-29 08:34:18.845904060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7bc4e18ca6c5dab2ce801212fec28a78128d0882fc8307828ed880b895938e08,PodSandboxId:432e7224b78c7b161a2be21a8b7bbab53497f66a5c79a01307b64a8644baea02,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1764405115014563058,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 91d59907-f6f5-4a62-a39f-f6c5de4fe9d9,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f841d5b89da650e1dbbe4dd4058833146baa40596f8195a53b5969ad91fcba67,PodSandboxId:612e270dc018797354806ca86d9d3d1851f04cbc6d16fb460cce8be7695ce87e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1764405077839493236,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a81c966d-80ea-4cb4-af63-4079ae7f315c,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60ce39e48bcd4efdf93ec6ae9ac6259afc5314a6d30b1d575004c9c4284613d4,PodSandboxId:987423905f107bcc2c8bff06004c5a7712d918aea305f8a9902896c96518bb30,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:97fe896f8c07b0249ba6fb4d239c469b3db26a58d56dc36d65485838e6762bab,State:CONTAINER_RUNNING,CreatedAt:1764405067998832160,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-6c8bf45fb-trgwz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c1901104-dbb9-4f94-985c-be9b2573659a,},Annotations:map[string]string{io.kubernetes.container.hash: ee716186,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5123cf8c492674047bece66e8d25e70c9415215170b4dcf664f9408c732f52e2,PodSandboxId:0206aa6e16097ba5832f879222212ee90ee3aafa8a3bd28ee8f00b627c511bfa,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764405045234351492,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-sdt5n,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 9b4a60cf-801d-4bbb-bce5-4466ffc64084,},Annotations:map[string]string{io.kubernetes.container.hash: a8b1ca00,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b6a0e2d1080d9593b218ad14800cb8f9f8c334ea7b62daacbab3f926754b9b,PodSandboxId:d110170d76f8f92cadc70a3f8565813b367ad6e44a6e6b902a663b158a6597d0,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45,State:CONTAINER_EXITED,CreatedAt:1764405042263362754,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-6pwjs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bfcdd30b-5977-4429-9edd-e51256b78e35,},Annotations:map[string]string{io.kubernetes.container.hash: 41135a0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0809c238946e8de86876f44850a8ec9bbd17666d75ee004e69005d73d5230b1,PodSandboxId:f44261ca5ed9ae0d00419b44f4d75498926fbc0dc8ff5a875d581dbcac1b9567,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1764405017640561952,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a82c192-8ed7-43b0-a6df-5452ef3d0494,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26f17f5295293b31f77d80c14086bfa9c2cd414d94768bebc20fcb12c30cf114,PodSandboxId:d3b04238dbe870496aef45e077b76f8730046b39c8709e05b29cc87fe1df64ff,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1764404996781287924,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-q6ggk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e721dc0e-2e23-4bcd-a8fa-d84566326095,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6f75a5c4aaa9fb8a464cfd62bd03fb6a2bffe85a33db52494aea5653c8306e,PodSandboxId:e776fdb06892740e49820c913d8520a06ee377ecdc696462040565c7ede0f898,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764404993918698532,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac55ea6b-02ab-4229-baef-8d112e0991a8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b431d3c97999ea2a9b7bcc93124e7b9011bf4248880ec64639c3595b10390f3c,PodSandboxId:edd893bbc7243cd3a9e2d8f2f8cd9b3ec3a6d4dccd122c350b28de981ab634d7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764404987035209701,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rd24h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 19a814d4-a17e-46b0-b4f6-a28f17377608,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfcdece95bcaa0a43e1d67b2a9f46fb03b0d7344e00c67921261a8f9052f6ca9,PodSandboxId:4ce7296f1ca94ebf516ae25481f851d3be0bec2c79de195393744abc542e7369,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764404986391391450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-m7v4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 73957cad-6d0d-405f-aeff-777f57eb12f5,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:387233778fb07ffe3611bd0f9ac9f84ce4f6c243360c25147bafd7c9011e5c78,PodSandboxId:b3323ed97a78104d4e17073ca9fecbc5e67a5719299c6a9b791618bfeee2ce9f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1764404973365391652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-213983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a4225dc54e60f7eb12ae066d1c73bd50,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9de99a70df7744d8953823ac7c5cc23f2b0db870d180d548cd716b797710b0a,PodSandboxId:d7349af861acf32c2eef9fe6c7af0c1d6e6048724755a7b6f0ca088b82652f9d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764404973326310533,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-213983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 969049aec7e7d88eaade5e5b8fc06e4c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78844f6484972079bbc7688afa8900169cd9fc313a923e73b9eb0371544c633b,PodSandboxId:5a3de68070b028fb75d22617142de754b182ac9950f24398fc701e2599997be4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764404973320578218,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-213983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f030ef3593f7c8901d8e89c1ce3a065,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b1fbfd71ea3bb00ff935b93d74a76eb9ac33487269e862f6979303b331a0b34,PodSandboxId:a662464c4d4fe01cc392820e335975d6dde952b886fb6f0b38bff913f2b73723,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764404973281840812,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-213983,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41eaacfe946489b0e0104837a8d6a277,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2c7333c-f04f-488b-ae1d-8833c0a5ca14 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                        NAMESPACE
	7bc4e18ca6c5d       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                              2 minutes ago       Running             nginx                     0                   432e7224b78c7       nginx                                      default
	f841d5b89da65       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   612e270dc0187       busybox                                    default
	60ce39e48bcd4       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27             3 minutes ago       Running             controller                0                   987423905f107       ingress-nginx-controller-6c8bf45fb-trgwz   ingress-nginx
	5123cf8c49267       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   3 minutes ago       Exited              patch                     0                   0206aa6e16097       ingress-nginx-admission-patch-sdt5n        ingress-nginx
	d7b6a0e2d1080       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:bcfc926ed57831edf102d62c5c0e259572591df4796ef1420b87f9cf6092497f   3 minutes ago       Exited              create                    0                   d110170d76f8f       ingress-nginx-admission-create-6pwjs       ingress-nginx
	b0809c238946e       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   f44261ca5ed9a       kube-ingress-dns-minikube                  kube-system
	26f17f5295293       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   d3b04238dbe87       amd-gpu-device-plugin-q6ggk                kube-system
	ec6f75a5c4aaa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   e776fdb068927       storage-provisioner                        kube-system
	b431d3c97999e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   edd893bbc7243       coredns-66bc5c9577-rd24h                   kube-system
	cfcdece95bcaa       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             4 minutes ago       Running             kube-proxy                0                   4ce7296f1ca94       kube-proxy-m7v4z                           kube-system
	387233778fb07       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             4 minutes ago       Running             etcd                      0                   b3323ed97a781       etcd-addons-213983                         kube-system
	e9de99a70df77       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             4 minutes ago       Running             kube-scheduler            0                   d7349af861acf       kube-scheduler-addons-213983               kube-system
	78844f6484972       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             4 minutes ago       Running             kube-controller-manager   0                   5a3de68070b02       kube-controller-manager-addons-213983      kube-system
	9b1fbfd71ea3b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             4 minutes ago       Running             kube-apiserver            0                   a662464c4d4fe       kube-apiserver-addons-213983               kube-system
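
The crio debug entries above and this table are two views of the same state: the debug lines record /runtime.v1.RuntimeService traffic on the CRI-O socket, and the status table is the same container inventory in crictl-style columns. As a minimal Go sketch (not part of the test suite; the socket path is an assumption, matching CRI-O's usual default), the ListContainers call looks like this:

    // Minimal sketch: issue the same /runtime.v1.RuntimeService/ListContainers
    // call that the crio debug log records. Assumes the default CRI-O socket.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // An empty filter returns every container, which is what produces the
        // "No filters were applied, returning full container list" debug line.
        resp, err := runtimeapi.NewRuntimeServiceClient(conn).
            ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range resp.Containers {
            // c.State prints as CONTAINER_RUNNING / CONTAINER_EXITED, etc.
            fmt.Printf("%-13.13s  %-25s  %s\n", c.Id, c.Metadata.Name, c.State)
        }
    }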
	
	
	==> coredns [b431d3c97999ea2a9b7bcc93124e7b9011bf4248880ec64639c3595b10390f3c] <==
	[INFO] 10.244.0.7:45485 - 61423 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000177215s
	[INFO] 10.244.0.7:45485 - 41511 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000153503s
	[INFO] 10.244.0.7:45485 - 39013 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000339108s
	[INFO] 10.244.0.7:45485 - 2092 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000113132s
	[INFO] 10.244.0.7:45485 - 54339 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000108788s
	[INFO] 10.244.0.7:45485 - 38097 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000145308s
	[INFO] 10.244.0.7:45485 - 42467 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000098918s
	[INFO] 10.244.0.7:48827 - 23137 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000218538s
	[INFO] 10.244.0.7:48827 - 23420 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000298955s
	[INFO] 10.244.0.7:34913 - 58878 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000115086s
	[INFO] 10.244.0.7:34913 - 59140 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000279383s
	[INFO] 10.244.0.7:45389 - 45525 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000131206s
	[INFO] 10.244.0.7:45389 - 45062 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081568s
	[INFO] 10.244.0.7:46837 - 20236 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000093458s
	[INFO] 10.244.0.7:46837 - 20680 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00040617s
	[INFO] 10.244.0.23:55517 - 15090 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000511272s
	[INFO] 10.244.0.23:33803 - 24026 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000221537s
	[INFO] 10.244.0.23:59360 - 3369 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000141495s
	[INFO] 10.244.0.23:38152 - 50259 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139201s
	[INFO] 10.244.0.23:36230 - 43984 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000262096s
	[INFO] 10.244.0.23:59692 - 52702 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000072245s
	[INFO] 10.244.0.23:34036 - 56370 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001609655s
	[INFO] 10.244.0.23:54579 - 31177 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.004497217s
	[INFO] 10.244.0.27:53785 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000306209s
	[INFO] 10.244.0.27:51860 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00156338s
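
The NXDOMAIN/NOERROR ladders above are ordinary Kubernetes DNS search-path expansion, not errors: registry.kube-system.svc.cluster.local contains four dots, below the pod resolver's ndots:5 threshold, so each search suffix is tried (failing with NXDOMAIN) before the name is queried as-is and succeeds. For a pod in kube-system the resolv.conf looks like the sketch below (the nameserver address is an assumption; 10.96.0.10 is only the conventional default for the kube-dns Service):

    nameserver 10.96.0.10
    search kube-system.svc.cluster.local svc.cluster.local cluster.local
    options ndots:5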
	
	
	==> describe nodes <==
	Name:               addons-213983
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-213983
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=addons-213983
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T08_29_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-213983
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 08:29:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-213983
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 08:34:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 08:32:13 +0000   Sat, 29 Nov 2025 08:29:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 08:32:13 +0000   Sat, 29 Nov 2025 08:29:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 08:32:13 +0000   Sat, 29 Nov 2025 08:29:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 08:32:13 +0000   Sat, 29 Nov 2025 08:29:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.35
	  Hostname:    addons-213983
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	System Info:
	  Machine ID:                 5dcc2ad0e1064ad5a099f42601726d5c
	  System UUID:                5dcc2ad0-e106-4ad5-a099-f42601726d5c
	  Boot ID:                    f97cdbee-bb9c-4327-b003-244db77590a6
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  default                     hello-world-app-5d498dc89-ggxlt             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-trgwz    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m26s
	  kube-system                 amd-gpu-device-plugin-q6ggk                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 coredns-66bc5c9577-rd24h                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m34s
	  kube-system                 etcd-addons-213983                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m40s
	  kube-system                 kube-apiserver-addons-213983                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-controller-manager-addons-213983       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-proxy-m7v4z                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-scheduler-addons-213983                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m31s  kube-proxy       
	  Normal  Starting                 4m40s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m40s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m40s  kubelet          Node addons-213983 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m40s  kubelet          Node addons-213983 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m40s  kubelet          Node addons-213983 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m39s  kubelet          Node addons-213983 status is now: NodeReady
	  Normal  RegisteredNode           4m35s  node-controller  Node addons-213983 event: Registered Node addons-213983 in Controller
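
The Allocated resources figures above follow directly from the pod requests listed, divided by node capacity and truncated to whole percents:

    cpu requests     100m+100m+100m+250m+200m+100m = 850m;   850m / 2000m (2 CPUs)   = 42.5% -> 42%
    memory requests  90Mi+70Mi+100Mi               = 260Mi;  266240Ki / 4001796Ki    ≈ 6.7%  -> 6%
    memory limits    170Mi (coredns)                         174080Ki / 4001796Ki    ≈ 4.3%  -> 4%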
	
	
	==> dmesg <==
	[Nov29 08:30] kauditd_printk_skb: 356 callbacks suppressed
	[ +13.320650] kauditd_printk_skb: 26 callbacks suppressed
	[  +6.162841] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.816673] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.107455] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.095088] kauditd_printk_skb: 26 callbacks suppressed
	[  +1.939224] kauditd_printk_skb: 136 callbacks suppressed
	[  +5.676421] kauditd_printk_skb: 76 callbacks suppressed
	[  +0.000855] kauditd_printk_skb: 120 callbacks suppressed
	[Nov29 08:31] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.000108] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.295100] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.696966] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.008044] kauditd_printk_skb: 22 callbacks suppressed
	[  +4.655802] kauditd_printk_skb: 38 callbacks suppressed
	[  +1.740281] kauditd_printk_skb: 156 callbacks suppressed
	[  +0.502352] kauditd_printk_skb: 235 callbacks suppressed
	[  +1.590195] kauditd_printk_skb: 86 callbacks suppressed
	[Nov29 08:32] kauditd_printk_skb: 39 callbacks suppressed
	[  +6.084566] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.547433] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.973339] kauditd_printk_skb: 10 callbacks suppressed
	[  +0.000056] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.851496] kauditd_printk_skb: 41 callbacks suppressed
	[Nov29 08:34] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [387233778fb07ffe3611bd0f9ac9f84ce4f6c243360c25147bafd7c9011e5c78] <==
	{"level":"warn","ts":"2025-11-29T08:30:56.520347Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-29T08:30:56.177014Z","time spent":"343.129246ms","remote":"127.0.0.1:58168","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3132,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/jobs/gcp-auth/gcp-auth-certs-patch\" mod_revision:848 > success:<request_put:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-patch\" value_size:3080 >> failure:<request_range:<key:\"/registry/jobs/gcp-auth/gcp-auth-certs-patch\" > >"}
	{"level":"info","ts":"2025-11-29T08:31:04.879023Z","caller":"traceutil/trace.go:172","msg":"trace[1142753890] transaction","detail":"{read_only:false; response_revision:1133; number_of_response:1; }","duration":"199.646069ms","start":"2025-11-29T08:31:04.679362Z","end":"2025-11-29T08:31:04.879008Z","steps":["trace[1142753890] 'process raft request'  (duration: 199.490043ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T08:31:13.132809Z","caller":"traceutil/trace.go:172","msg":"trace[2003606447] linearizableReadLoop","detail":"{readStateIndex:1213; appliedIndex:1213; }","duration":"141.636149ms","start":"2025-11-29T08:31:12.991156Z","end":"2025-11-29T08:31:13.132793Z","steps":["trace[2003606447] 'read index received'  (duration: 141.630494ms)","trace[2003606447] 'applied index is now lower than readState.Index'  (duration: 5.079µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-29T08:31:13.133055Z","caller":"traceutil/trace.go:172","msg":"trace[1181990154] transaction","detail":"{read_only:false; response_revision:1175; number_of_response:1; }","duration":"148.54987ms","start":"2025-11-29T08:31:12.984494Z","end":"2025-11-29T08:31:13.133044Z","steps":["trace[1181990154] 'process raft request'  (duration: 148.349208ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T08:31:13.133084Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.883225ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-213983\" limit:1 ","response":"range_response_count:1 size:10427"}
	{"level":"info","ts":"2025-11-29T08:31:13.133111Z","caller":"traceutil/trace.go:172","msg":"trace[1968632693] range","detail":"{range_begin:/registry/minions/addons-213983; range_end:; response_count:1; response_revision:1174; }","duration":"141.952927ms","start":"2025-11-29T08:31:12.991151Z","end":"2025-11-29T08:31:13.133104Z","steps":["trace[1968632693] 'agreement among raft nodes before linearized reading'  (duration: 141.736245ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T08:31:13.133304Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.606886ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-29T08:31:13.133322Z","caller":"traceutil/trace.go:172","msg":"trace[8564932] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1175; }","duration":"101.630787ms","start":"2025-11-29T08:31:13.031686Z","end":"2025-11-29T08:31:13.133317Z","steps":["trace[8564932] 'agreement among raft nodes before linearized reading'  (duration: 101.596231ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T08:31:39.433504Z","caller":"traceutil/trace.go:172","msg":"trace[1592371158] transaction","detail":"{read_only:false; response_revision:1345; number_of_response:1; }","duration":"121.124532ms","start":"2025-11-29T08:31:39.312366Z","end":"2025-11-29T08:31:39.433490Z","steps":["trace[1592371158] 'process raft request'  (duration: 120.015082ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T08:31:39.600907Z","caller":"traceutil/trace.go:172","msg":"trace[2144474277] linearizableReadLoop","detail":"{readStateIndex:1392; appliedIndex:1392; }","duration":"160.636069ms","start":"2025-11-29T08:31:39.440000Z","end":"2025-11-29T08:31:39.600636Z","steps":["trace[2144474277] 'read index received'  (duration: 160.628972ms)","trace[2144474277] 'applied index is now lower than readState.Index'  (duration: 5.834µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T08:31:39.602578Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"162.577212ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-29T08:31:39.602629Z","caller":"traceutil/trace.go:172","msg":"trace[983991917] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1345; }","duration":"162.640643ms","start":"2025-11-29T08:31:39.439978Z","end":"2025-11-29T08:31:39.602619Z","steps":["trace[983991917] 'agreement among raft nodes before linearized reading'  (duration: 161.723878ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T08:31:39.604496Z","caller":"traceutil/trace.go:172","msg":"trace[432771620] transaction","detail":"{read_only:false; response_revision:1346; number_of_response:1; }","duration":"287.355125ms","start":"2025-11-29T08:31:39.317129Z","end":"2025-11-29T08:31:39.604484Z","steps":["trace[432771620] 'process raft request'  (duration: 284.557005ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T08:31:39.604734Z","caller":"traceutil/trace.go:172","msg":"trace[1361626722] transaction","detail":"{read_only:false; response_revision:1347; number_of_response:1; }","duration":"147.363203ms","start":"2025-11-29T08:31:39.457365Z","end":"2025-11-29T08:31:39.604729Z","steps":["trace[1361626722] 'process raft request'  (duration: 145.98679ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T08:31:54.906963Z","caller":"traceutil/trace.go:172","msg":"trace[210847603] linearizableReadLoop","detail":"{readStateIndex:1580; appliedIndex:1580; }","duration":"222.576343ms","start":"2025-11-29T08:31:54.684326Z","end":"2025-11-29T08:31:54.906902Z","steps":["trace[210847603] 'read index received'  (duration: 222.570859ms)","trace[210847603] 'applied index is now lower than readState.Index'  (duration: 4.599µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T08:31:54.907178Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"222.853167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" limit:1 ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2025-11-29T08:31:54.907211Z","caller":"traceutil/trace.go:172","msg":"trace[1493604083] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1524; }","duration":"222.901757ms","start":"2025-11-29T08:31:54.684303Z","end":"2025-11-29T08:31:54.907205Z","steps":["trace[1493604083] 'agreement among raft nodes before linearized reading'  (duration: 222.771147ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T08:31:54.907533Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.347833ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-29T08:31:54.907578Z","caller":"traceutil/trace.go:172","msg":"trace[1616910499] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1525; }","duration":"122.397918ms","start":"2025-11-29T08:31:54.785174Z","end":"2025-11-29T08:31:54.907572Z","steps":["trace[1616910499] 'agreement among raft nodes before linearized reading'  (duration: 122.334026ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T08:31:54.907857Z","caller":"traceutil/trace.go:172","msg":"trace[2033351446] transaction","detail":"{read_only:false; response_revision:1525; number_of_response:1; }","duration":"279.559242ms","start":"2025-11-29T08:31:54.628289Z","end":"2025-11-29T08:31:54.907849Z","steps":["trace[2033351446] 'process raft request'  (duration: 279.1291ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T08:32:07.533649Z","caller":"traceutil/trace.go:172","msg":"trace[1588990179] linearizableReadLoop","detail":"{readStateIndex:1661; appliedIndex:1661; }","duration":"136.277677ms","start":"2025-11-29T08:32:07.397353Z","end":"2025-11-29T08:32:07.533630Z","steps":["trace[1588990179] 'read index received'  (duration: 136.265545ms)","trace[1588990179] 'applied index is now lower than readState.Index'  (duration: 11.223µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T08:32:07.533816Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.399437ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-29T08:32:07.533841Z","caller":"traceutil/trace.go:172","msg":"trace[783164902] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1601; }","duration":"136.485985ms","start":"2025-11-29T08:32:07.397348Z","end":"2025-11-29T08:32:07.533834Z","steps":["trace[783164902] 'agreement among raft nodes before linearized reading'  (duration: 136.369078ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T08:32:07.533856Z","caller":"traceutil/trace.go:172","msg":"trace[1026548089] transaction","detail":"{read_only:false; response_revision:1602; number_of_response:1; }","duration":"183.11559ms","start":"2025-11-29T08:32:07.350728Z","end":"2025-11-29T08:32:07.533843Z","steps":["trace[1026548089] 'process raft request'  (duration: 182.925566ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T08:32:16.103300Z","caller":"traceutil/trace.go:172","msg":"trace[202859245] transaction","detail":"{read_only:false; response_revision:1635; number_of_response:1; }","duration":"227.846336ms","start":"2025-11-29T08:32:15.875441Z","end":"2025-11-29T08:32:16.103287Z","steps":["trace[202859245] 'process raft request'  (duration: 227.746812ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:34:19 up 5 min,  0 users,  load average: 0.91, 1.38, 0.69
	Linux addons-213983 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [9b1fbfd71ea3bb00ff935b93d74a76eb9ac33487269e862f6979303b331a0b34] <==
	E1129 08:30:26.225525       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.101.152:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.101.152:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.101.152:443: connect: connection refused" logger="UnhandledError"
	E1129 08:30:26.231972       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.101.152:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.101.152:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.101.152:443: connect: connection refused" logger="UnhandledError"
	I1129 08:30:26.305401       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1129 08:31:24.418289       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:33456: use of closed network connection
	E1129 08:31:24.607443       1 conn.go:339] Error on socket receive: read tcp 192.168.39.35:8443->192.168.39.1:33486: use of closed network connection
	I1129 08:31:33.820125       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.212.127"}
	I1129 08:31:49.557782       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1129 08:31:49.776391       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.197.27"}
	E1129 08:32:09.158205       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1129 08:32:16.875553       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1129 08:32:27.238253       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1129 08:32:38.015298       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1129 08:32:38.015385       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1129 08:32:38.044490       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1129 08:32:38.044578       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1129 08:32:38.057796       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1129 08:32:38.058001       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1129 08:32:38.072363       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1129 08:32:38.072414       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1129 08:32:38.217711       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1129 08:32:38.217820       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1129 08:32:39.058455       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1129 08:32:39.218997       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1129 08:32:39.234045       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1129 08:34:17.787898       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.52.186"}
	
	
	==> kube-controller-manager [78844f6484972079bbc7688afa8900169cd9fc313a923e73b9eb0371544c633b] <==
	I1129 08:32:44.510213       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1129 08:32:45.821258       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1129 08:32:45.822975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1129 08:32:47.399852       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1129 08:32:47.400741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1129 08:32:47.774507       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1129 08:32:47.775706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1129 08:32:55.872821       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1129 08:32:55.873965       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1129 08:32:56.165038       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1129 08:32:56.166133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1129 08:32:56.683706       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1129 08:32:56.684985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1129 08:33:10.949741       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1129 08:33:10.950842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1129 08:33:18.034857       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1129 08:33:18.036088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1129 08:33:21.228144       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1129 08:33:21.229396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1129 08:33:58.822833       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1129 08:33:58.823899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1129 08:34:00.423000       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1129 08:34:00.424272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1129 08:34:08.967019       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1129 08:34:08.968182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [cfcdece95bcaa0a43e1d67b2a9f46fb03b0d7344e00c67921261a8f9052f6ca9] <==
	I1129 08:29:47.452848       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 08:29:47.553455       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 08:29:47.554189       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.35"]
	E1129 08:29:47.557308       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 08:29:47.823247       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1129 08:29:47.823307       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1129 08:29:47.823335       1 server_linux.go:132] "Using iptables Proxier"
	I1129 08:29:47.874374       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 08:29:47.879055       1 server.go:527] "Version info" version="v1.34.1"
	I1129 08:29:47.879124       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 08:29:47.890189       1 config.go:200] "Starting service config controller"
	I1129 08:29:47.890217       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 08:29:47.890237       1 config.go:106] "Starting endpoint slice config controller"
	I1129 08:29:47.890240       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 08:29:47.890250       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 08:29:47.890253       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 08:29:47.896544       1 config.go:309] "Starting node config controller"
	I1129 08:29:47.896579       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 08:29:47.896586       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 08:29:47.991448       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 08:29:47.991489       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 08:29:47.991534       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e9de99a70df7744d8953823ac7c5cc23f2b0db870d180d548cd716b797710b0a] <==
	E1129 08:29:36.567819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 08:29:36.568005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 08:29:36.568197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 08:29:36.568547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 08:29:36.568713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 08:29:36.569107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 08:29:37.383537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 08:29:37.412121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 08:29:37.442882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1129 08:29:37.484213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 08:29:37.559351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 08:29:37.763092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 08:29:37.785774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 08:29:37.799597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 08:29:37.851262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 08:29:37.870088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 08:29:37.914017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 08:29:37.960274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 08:29:37.983239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1129 08:29:37.988498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 08:29:38.019780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 08:29:38.060074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 08:29:38.083222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 08:29:38.129294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1129 08:29:40.051880       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 08:32:41 addons-213983 kubelet[1488]: I1129 08:32:41.409039    1488 scope.go:117] "RemoveContainer" containerID="92d5f66a0467edb9b60d059d7ccbc7ec09a09bc9af439df5e3d13f02055bca83"
	Nov 29 08:32:41 addons-213983 kubelet[1488]: I1129 08:32:41.526410    1488 scope.go:117] "RemoveContainer" containerID="3570c82a9d72e5d3d29c141138d9e3155b80a5b22cd9eb9d0fc3e29c47fe4108"
	Nov 29 08:32:41 addons-213983 kubelet[1488]: E1129 08:32:41.527234    1488 kuberuntime_gc.go:151] "Failed to remove container" err="failed to get container status \"3570c82a9d72e5d3d29c141138d9e3155b80a5b22cd9eb9d0fc3e29c47fe4108\": rpc error: code = NotFound desc = could not find container \"3570c82a9d72e5d3d29c141138d9e3155b80a5b22cd9eb9d0fc3e29c47fe4108\": container with ID starting with 3570c82a9d72e5d3d29c141138d9e3155b80a5b22cd9eb9d0fc3e29c47fe4108 not found: ID does not exist" containerID="3570c82a9d72e5d3d29c141138d9e3155b80a5b22cd9eb9d0fc3e29c47fe4108"
	Nov 29 08:32:46 addons-213983 kubelet[1488]: I1129 08:32:46.385245    1488 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 29 08:32:49 addons-213983 kubelet[1488]: E1129 08:32:49.629563    1488 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764405169629128659  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 29 08:32:49 addons-213983 kubelet[1488]: E1129 08:32:49.629626    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764405169629128659  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 29 08:32:59 addons-213983 kubelet[1488]: E1129 08:32:59.633799    1488 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764405179633479665  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 29 08:32:59 addons-213983 kubelet[1488]: E1129 08:32:59.633830    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764405179633479665  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 29 08:33:09 addons-213983 kubelet[1488]: E1129 08:33:09.637351    1488 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764405189636778516  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 29 08:33:09 addons-213983 kubelet[1488]: E1129 08:33:09.637401    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764405189636778516  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 29 08:33:19 addons-213983 kubelet[1488]: E1129 08:33:19.640766    1488 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764405199640289568  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 29 08:33:19 addons-213983 kubelet[1488]: E1129 08:33:19.640790    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764405199640289568  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 29 08:33:29 addons-213983 kubelet[1488]: E1129 08:33:29.644045    1488 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764405209643589322  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 29 08:33:29 addons-213983 kubelet[1488]: E1129 08:33:29.644069    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764405209643589322  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 29 08:33:39 addons-213983 kubelet[1488]: E1129 08:33:39.647022    1488 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764405219646545899  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 29 08:33:39 addons-213983 kubelet[1488]: E1129 08:33:39.647056    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764405219646545899  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 29 08:33:43 addons-213983 kubelet[1488]: I1129 08:33:43.385273    1488 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-q6ggk" secret="" err="secret \"gcp-auth\" not found"
	Nov 29 08:33:49 addons-213983 kubelet[1488]: E1129 08:33:49.649254    1488 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764405229648756372  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 29 08:33:49 addons-213983 kubelet[1488]: E1129 08:33:49.649340    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764405229648756372  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 29 08:33:59 addons-213983 kubelet[1488]: E1129 08:33:59.652910    1488 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764405239652350086  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 29 08:33:59 addons-213983 kubelet[1488]: E1129 08:33:59.652978    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764405239652350086  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 29 08:34:09 addons-213983 kubelet[1488]: E1129 08:34:09.656250    1488 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764405249655820909  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 29 08:34:09 addons-213983 kubelet[1488]: E1129 08:34:09.656275    1488 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764405249655820909  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588567}  inodes_used:{value:201}}"
	Nov 29 08:34:16 addons-213983 kubelet[1488]: I1129 08:34:16.383655    1488 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 29 08:34:17 addons-213983 kubelet[1488]: I1129 08:34:17.798908    1488 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npw9k\" (UniqueName: \"kubernetes.io/projected/04d6b2c1-f9d1-494b-8bbd-987acaaa9833-kube-api-access-npw9k\") pod \"hello-world-app-5d498dc89-ggxlt\" (UID: \"04d6b2c1-f9d1-494b-8bbd-987acaaa9833\") " pod="default/hello-world-app-5d498dc89-ggxlt"
	
	
	==> storage-provisioner [ec6f75a5c4aaa9fb8a464cfd62bd03fb6a2bffe85a33db52494aea5653c8306e] <==
	W1129 08:33:54.695313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:33:56.698561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:33:56.706908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:33:58.711488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:33:58.717020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:00.720805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:00.726111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:02.729519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:02.734599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:04.739494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:04.745568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:06.749168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:06.756156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:08.760051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:08.765151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:10.769137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:10.774609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:12.778575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:12.786282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:14.790334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:14.796070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:16.799988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:16.808129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:18.812465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1129 08:34:18.822169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
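The storage-provisioner tail of the log above is one repeated client-go warning: the provisioner still reads the deprecated core/v1 Endpoints API (most likely for its leader election). For reference, the replacement the warning points at is discovery.k8s.io/v1 EndpointSlice; a minimal client-go sketch of that read follows. The kubeconfig resolution and the kube-system namespace are illustrative assumptions, not taken from the provisioner.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative kubeconfig resolution (~/.kube/config); inside a
		// pod this would be rest.InClusterConfig() instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// discovery.k8s.io/v1 EndpointSlice is the suggested replacement
		// for the deprecated core/v1 Endpoints read behind the warning.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(
			context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}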
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-213983 -n addons-213983
helpers_test.go:269: (dbg) Run:  kubectl --context addons-213983 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-ggxlt ingress-nginx-admission-create-6pwjs ingress-nginx-admission-patch-sdt5n
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-213983 describe pod hello-world-app-5d498dc89-ggxlt ingress-nginx-admission-create-6pwjs ingress-nginx-admission-patch-sdt5n
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-213983 describe pod hello-world-app-5d498dc89-ggxlt ingress-nginx-admission-create-6pwjs ingress-nginx-admission-patch-sdt5n: exit status 1 (91.857208ms)

-- stdout --
	Name:             hello-world-app-5d498dc89-ggxlt
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-213983/192.168.39.35
	Start Time:       Sat, 29 Nov 2025 08:34:17 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-npw9k (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-npw9k:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-ggxlt to addons-213983
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6pwjs" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-sdt5n" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-213983 describe pod hello-world-app-5d498dc89-ggxlt ingress-nginx-admission-create-6pwjs ingress-nginx-admission-patch-sdt5n: exit status 1
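The sweep at helpers_test.go:269 is plain kubectl with a field selector (status.phase!=Running, all namespaces), which is why both the still-creating hello-world-app pod and the two finished admission-job pods matched; the admission pods were evidently garbage-collected between the list and the describe, hence the NotFound errors. The same query, as a minimal client-go sketch (the kubeconfig path is an illustrative assumption):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative kubeconfig resolution (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Same filter the post-mortem uses: every pod whose phase is not
		// Running, across all namespaces.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}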
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-213983 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-213983 addons disable ingress-dns --alsologtostderr -v=1: (1.146537881s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-213983 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-213983 addons disable ingress --alsologtostderr -v=1: (7.71261655s)
--- FAIL: TestAddons/parallel/Ingress (159.62s)
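The step that actually failed is the in-VM curl to http://127.0.0.1/ with the Host header nginx.example.com; status 28 propagated through ssh is curl's "operation timed out", so the ingress controller never answered rather than answering wrongly. The same probe as standalone Go, for anyone reproducing it outside curl (URL and host name come from the test; the timeout value is an illustrative assumption):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
		if err != nil {
			panic(err)
		}
		// Setting req.Host (not req.Header.Set("Host", ...)) is how a Go
		// client overrides the Host header; net/http ignores a Host entry
		// in the header map. The ingress controller routes on this value.
		req.Host = "nginx.example.com"

		client := &http.Client{Timeout: 10 * time.Second} // illustrative timeout
		resp, err := client.Do(req)
		if err != nil {
			panic(err) // a timeout here matches the curl exit-28 behaviour
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, len(body), "bytes")
	}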

TestFunctional/parallel/ImageCommands/ImageBuild (6.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180687 ssh pgrep buildkitd: exit status 1 (178.092565ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 image build -t localhost/my-image:functional-180687 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-180687 image build -t localhost/my-image:functional-180687 testdata/build --alsologtostderr: (3.79345802s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-180687 image build -t localhost/my-image:functional-180687 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2866bf1f868
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-180687
--> 3da2c38dbe4
Successfully tagged localhost/my-image:functional-180687
3da2c38dbe419e75ae1b52851e886a84a6125f171603164983aa0fbfc8453a63
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-180687 image build -t localhost/my-image:functional-180687 testdata/build --alsologtostderr:
I1129 08:40:35.875036   16176 out.go:360] Setting OutFile to fd 1 ...
I1129 08:40:35.875173   16176 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:40:35.875186   16176 out.go:374] Setting ErrFile to fd 2...
I1129 08:40:35.875190   16176 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:40:35.875411   16176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
I1129 08:40:35.876029   16176 config.go:182] Loaded profile config "functional-180687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 08:40:35.876641   16176 config.go:182] Loaded profile config "functional-180687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 08:40:35.879701   16176 ssh_runner.go:195] Run: systemctl --version
I1129 08:40:35.883107   16176 main.go:143] libmachine: domain functional-180687 has defined MAC address 52:54:00:db:70:4b in network mk-functional-180687
I1129 08:40:35.883629   16176 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:db:70:4b", ip: ""} in network mk-functional-180687: {Iface:virbr1 ExpiryTime:2025-11-29 09:36:50 +0000 UTC Type:0 Mac:52:54:00:db:70:4b Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:functional-180687 Clientid:01:52:54:00:db:70:4b}
I1129 08:40:35.883657   16176 main.go:143] libmachine: domain functional-180687 has defined IP address 192.168.39.50 and MAC address 52:54:00:db:70:4b in network mk-functional-180687
I1129 08:40:35.883849   16176 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/functional-180687/id_rsa Username:docker}
I1129 08:40:35.969646   16176 build_images.go:162] Building image from path: /tmp/build.713949405.tar
I1129 08:40:35.969723   16176 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1129 08:40:35.985341   16176 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.713949405.tar
I1129 08:40:35.990892   16176 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.713949405.tar: stat -c "%s %y" /var/lib/minikube/build/build.713949405.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.713949405.tar': No such file or directory
I1129 08:40:35.990924   16176 ssh_runner.go:362] scp /tmp/build.713949405.tar --> /var/lib/minikube/build/build.713949405.tar (3072 bytes)
I1129 08:40:36.030064   16176 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.713949405
I1129 08:40:36.045737   16176 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.713949405 -xf /var/lib/minikube/build/build.713949405.tar
I1129 08:40:36.060256   16176 crio.go:315] Building image: /var/lib/minikube/build/build.713949405
I1129 08:40:36.060325   16176 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-180687 /var/lib/minikube/build/build.713949405 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1129 08:40:39.549915   16176 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-180687 /var/lib/minikube/build/build.713949405 --cgroup-manager=cgroupfs: (3.489553112s)
I1129 08:40:39.549993   16176 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.713949405
I1129 08:40:39.574870   16176 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.713949405.tar
I1129 08:40:39.596006   16176 build_images.go:218] Built localhost/my-image:functional-180687 from /tmp/build.713949405.tar
I1129 08:40:39.596041   16176 build_images.go:134] succeeded building to: functional-180687
I1129 08:40:39.596047   16176 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-180687 image ls: (2.24688388s)
functional_test.go:461: expected "localhost/my-image:functional-180687" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (6.22s)
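Note that the build itself succeeded ("Successfully tagged" above); the failure is the follow-up assertion at functional_test.go:461, which expects the fresh tag to appear in `image ls`. A rough sketch of that kind of presence check, shelling out to the same binary (the helper name is hypothetical; the binary path, profile, and tag come from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imageLoaded is a hypothetical helper mirroring the test's assertion:
	// run `minikube image ls` for the profile and look for the tag.
	func imageLoaded(profile, tag string) (bool, error) {
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", profile, "image", "ls").Output()
		if err != nil {
			return false, err
		}
		return strings.Contains(string(out), tag), nil
	}

	func main() {
		ok, err := imageLoaded("functional-180687", "localhost/my-image:functional-180687")
		if err != nil {
			panic(err)
		}
		fmt.Println("image present:", ok) // the failing run would print false
	}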

TestFunctional/parallel/ImageCommands/ImageRemove (3.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 image rm kicbase/echo-server:functional-180687 --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-180687 image rm kicbase/echo-server:functional-180687 --alsologtostderr: (3.155633925s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 image ls
functional_test.go:418: expected "kicbase/echo-server:functional-180687" to be removed from minikube but still exists
--- FAIL: TestFunctional/parallel/ImageCommands/ImageRemove (3.39s)
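This is the inverse of the ImageBuild failure: `image rm` returned success, yet the tag still shows up in the subsequent `image ls`. Sketched as the same kind of shell-out check (binary path, profile, and tag taken from the log; everything else is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Remove the tag, then assert it no longer appears in `image ls`.
		tag := "kicbase/echo-server:functional-180687"
		if err := exec.Command("out/minikube-linux-amd64",
			"-p", "functional-180687", "image", "rm", tag).Run(); err != nil {
			panic(err)
		}
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "functional-180687", "image", "ls").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("still present:", strings.Contains(string(out), tag)) // failing run: true
	}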

TestPreload (153.63s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-668578 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1129 09:20:03.243197    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:21:14.290792    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-668578 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m36.877130411s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-668578 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-668578 image pull gcr.io/k8s-minikube/busybox: (3.95694956s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-668578
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-668578: (7.027217361s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-668578 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-668578 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (43.13262546s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-668578 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.1
	registry.k8s.io/kube-proxy:v1.34.1
	registry.k8s.io/kube-controller-manager:v1.34.1
	registry.k8s.io/kube-apiserver:v1.34.1
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

-- /stdout --
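The scenario under test: start with --preload=false, pull an extra image, stop, restart with --preload=true, and expect the previously pulled image to survive the restart. The stdout above shows the restarted cluster serving only the preloaded images, i.e. the busybox pull was lost across the stop/start. Condensed into a sketch (binary and profile names from the log; most start flags omitted for brevity, error handling reduced to panics):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func run(args ...string) string {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
		return string(out)
	}

	func main() {
		const p = "test-preload-668578"
		run("start", "-p", p, "--memory=3072", "--preload=false",
			"--driver=kvm2", "--container-runtime=crio")
		run("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
		run("stop", "-p", p)
		run("start", "-p", p, "--preload=true",
			"--driver=kvm2", "--container-runtime=crio")

		// The image pulled before the stop must still be present afterwards.
		if !strings.Contains(run("-p", p, "image", "list"), "gcr.io/k8s-minikube/busybox") {
			panic("busybox missing after preload restart") // this is what the test hit
		}
	}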
panic.go:615: *** TestPreload FAILED at 2025-11-29 09:22:17.848881584 +0000 UTC m=+3230.235730508
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-668578 -n test-preload-668578
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-668578 logs -n 25
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-446803 ssh -n multinode-446803-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-446803     │ jenkins │ v1.37.0 │ 29 Nov 25 09:09 UTC │ 29 Nov 25 09:09 UTC │
	│ ssh     │ multinode-446803 ssh -n multinode-446803 sudo cat /home/docker/cp-test_multinode-446803-m03_multinode-446803.txt                                          │ multinode-446803     │ jenkins │ v1.37.0 │ 29 Nov 25 09:09 UTC │ 29 Nov 25 09:09 UTC │
	│ cp      │ multinode-446803 cp multinode-446803-m03:/home/docker/cp-test.txt multinode-446803-m02:/home/docker/cp-test_multinode-446803-m03_multinode-446803-m02.txt │ multinode-446803     │ jenkins │ v1.37.0 │ 29 Nov 25 09:09 UTC │ 29 Nov 25 09:09 UTC │
	│ ssh     │ multinode-446803 ssh -n multinode-446803-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-446803     │ jenkins │ v1.37.0 │ 29 Nov 25 09:09 UTC │ 29 Nov 25 09:09 UTC │
	│ ssh     │ multinode-446803 ssh -n multinode-446803-m02 sudo cat /home/docker/cp-test_multinode-446803-m03_multinode-446803-m02.txt                                  │ multinode-446803     │ jenkins │ v1.37.0 │ 29 Nov 25 09:09 UTC │ 29 Nov 25 09:09 UTC │
	│ node    │ multinode-446803 node stop m03                                                                                                                            │ multinode-446803     │ jenkins │ v1.37.0 │ 29 Nov 25 09:09 UTC │ 29 Nov 25 09:09 UTC │
	│ node    │ multinode-446803 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-446803     │ jenkins │ v1.37.0 │ 29 Nov 25 09:09 UTC │ 29 Nov 25 09:09 UTC │
	│ node    │ list -p multinode-446803                                                                                                                                  │ multinode-446803     │ jenkins │ v1.37.0 │ 29 Nov 25 09:09 UTC │                     │
	│ stop    │ -p multinode-446803                                                                                                                                       │ multinode-446803     │ jenkins │ v1.37.0 │ 29 Nov 25 09:09 UTC │ 29 Nov 25 09:12 UTC │
	│ start   │ -p multinode-446803 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-446803     │ jenkins │ v1.37.0 │ 29 Nov 25 09:12 UTC │ 29 Nov 25 09:14 UTC │
	│ node    │ list -p multinode-446803                                                                                                                                  │ multinode-446803     │ jenkins │ v1.37.0 │ 29 Nov 25 09:14 UTC │                     │
	│ node    │ multinode-446803 node delete m03                                                                                                                          │ multinode-446803     │ jenkins │ v1.37.0 │ 29 Nov 25 09:14 UTC │ 29 Nov 25 09:14 UTC │
	│ stop    │ multinode-446803 stop                                                                                                                                     │ multinode-446803     │ jenkins │ v1.37.0 │ 29 Nov 25 09:14 UTC │ 29 Nov 25 09:17 UTC │
	│ start   │ -p multinode-446803 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-446803     │ jenkins │ v1.37.0 │ 29 Nov 25 09:17 UTC │ 29 Nov 25 09:19 UTC │
	│ node    │ list -p multinode-446803                                                                                                                                  │ multinode-446803     │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │                     │
	│ start   │ -p multinode-446803-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-446803-m02 │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │                     │
	│ start   │ -p multinode-446803-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-446803-m03 │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:19 UTC │
	│ node    │ add -p multinode-446803                                                                                                                                   │ multinode-446803     │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │                     │
	│ delete  │ -p multinode-446803-m03                                                                                                                                   │ multinode-446803-m03 │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:19 UTC │
	│ delete  │ -p multinode-446803                                                                                                                                       │ multinode-446803     │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:19 UTC │
	│ start   │ -p test-preload-668578 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-668578  │ jenkins │ v1.37.0 │ 29 Nov 25 09:19 UTC │ 29 Nov 25 09:21 UTC │
	│ image   │ test-preload-668578 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-668578  │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ stop    │ -p test-preload-668578                                                                                                                                    │ test-preload-668578  │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:21 UTC │
	│ start   │ -p test-preload-668578 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-668578  │ jenkins │ v1.37.0 │ 29 Nov 25 09:21 UTC │ 29 Nov 25 09:22 UTC │
	│ image   │ test-preload-668578 image list                                                                                                                            │ test-preload-668578  │ jenkins │ v1.37.0 │ 29 Nov 25 09:22 UTC │ 29 Nov 25 09:22 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:21:34
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:21:34.589762   32876 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:21:34.590060   32876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:21:34.590072   32876 out.go:374] Setting ErrFile to fd 2...
	I1129 09:21:34.590077   32876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:21:34.590252   32876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
	I1129 09:21:34.590681   32876 out.go:368] Setting JSON to false
	I1129 09:21:34.591650   32876 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3839,"bootTime":1764404256,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:21:34.591705   32876 start.go:143] virtualization: kvm guest
	I1129 09:21:34.594553   32876 out.go:179] * [test-preload-668578] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:21:34.595957   32876 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:21:34.596007   32876 notify.go:221] Checking for updates...
	I1129 09:21:34.598011   32876 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:21:34.599327   32876 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5651/kubeconfig
	I1129 09:21:34.600390   32876 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5651/.minikube
	I1129 09:21:34.601502   32876 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:21:34.602531   32876 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:21:34.604128   32876 config.go:182] Loaded profile config "test-preload-668578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:21:34.604584   32876 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:21:34.638065   32876 out.go:179] * Using the kvm2 driver based on existing profile
	I1129 09:21:34.639260   32876 start.go:309] selected driver: kvm2
	I1129 09:21:34.639274   32876 start.go:927] validating driver "kvm2" against &{Name:test-preload-668578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:test-preload-668578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:21:34.639366   32876 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:21:34.640314   32876 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:21:34.640345   32876 cni.go:84] Creating CNI manager for ""
	I1129 09:21:34.640394   32876 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1129 09:21:34.640449   32876 start.go:353] cluster config:
	{Name:test-preload-668578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:test-preload-668578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:21:34.640535   32876 iso.go:125] acquiring lock: {Name:mk0184b92a126aea44cd27d4836c247b817b0333 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:21:34.642054   32876 out.go:179] * Starting "test-preload-668578" primary control-plane node in "test-preload-668578" cluster
	I1129 09:21:34.643200   32876 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:21:34.643241   32876 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5651/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 09:21:34.643252   32876 cache.go:65] Caching tarball of preloaded images
	I1129 09:21:34.643335   32876 preload.go:238] Found /home/jenkins/minikube-integration/22000-5651/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 09:21:34.643347   32876 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:21:34.643431   32876 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/test-preload-668578/config.json ...
	I1129 09:21:34.643623   32876 start.go:360] acquireMachinesLock for test-preload-668578: {Name:mke0bd376b87e419ebada00803bbcbb9230316d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1129 09:21:34.643665   32876 start.go:364] duration metric: took 24.953µs to acquireMachinesLock for "test-preload-668578"
	I1129 09:21:34.643678   32876 start.go:96] Skipping create...Using existing machine configuration
	I1129 09:21:34.643682   32876 fix.go:54] fixHost starting: 
	I1129 09:21:34.645533   32876 fix.go:112] recreateIfNeeded on test-preload-668578: state=Stopped err=<nil>
	W1129 09:21:34.645556   32876 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 09:21:34.647761   32876 out.go:252] * Restarting existing kvm2 VM for "test-preload-668578" ...
	I1129 09:21:34.647801   32876 main.go:143] libmachine: starting domain...
	I1129 09:21:34.647812   32876 main.go:143] libmachine: ensuring networks are active...
	I1129 09:21:34.648687   32876 main.go:143] libmachine: Ensuring network default is active
	I1129 09:21:34.649170   32876 main.go:143] libmachine: Ensuring network mk-test-preload-668578 is active
	I1129 09:21:34.649694   32876 main.go:143] libmachine: getting domain XML...
	I1129 09:21:34.650951   32876 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-668578</name>
	  <uuid>cf14fbf5-b44c-4932-b6a5-e28de54952ee</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22000-5651/.minikube/machines/test-preload-668578/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22000-5651/.minikube/machines/test-preload-668578/test-preload-668578.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:ba:cc:f4'/>
	      <source network='mk-test-preload-668578'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:59:6d:77'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
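The domain XML above is what libmachine hands to libvirt when it restarts the VM. The same domain can be inspected and driven directly with virsh, a sketch assuming the qemu:///system URI named in the profile's KVMQemuURI field:

    virsh -c qemu:///system dumpxml test-preload-668578    # prints the XML shown above
    virsh -c qemu:///system start test-preload-668578      # what "starting domain..." performs
    virsh -c qemu:///system domifaddr test-preload-668578  # the DHCP lease the log waits for next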
	
	I1129 09:21:35.908604   32876 main.go:143] libmachine: waiting for domain to start...
	I1129 09:21:35.909939   32876 main.go:143] libmachine: domain is now running
	I1129 09:21:35.909955   32876 main.go:143] libmachine: waiting for IP...
	I1129 09:21:35.910723   32876 main.go:143] libmachine: domain test-preload-668578 has defined MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:35.911202   32876 main.go:143] libmachine: domain test-preload-668578 has current primary IP address 192.168.39.242 and MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:35.911214   32876 main.go:143] libmachine: found domain IP: 192.168.39.242
	I1129 09:21:35.911218   32876 main.go:143] libmachine: reserving static IP address...
	I1129 09:21:35.911607   32876 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-668578", mac: "52:54:00:ba:cc:f4", ip: "192.168.39.242"} in network mk-test-preload-668578: {Iface:virbr1 ExpiryTime:2025-11-29 10:20:01 +0000 UTC Type:0 Mac:52:54:00:ba:cc:f4 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-668578 Clientid:01:52:54:00:ba:cc:f4}
	I1129 09:21:35.911631   32876 main.go:143] libmachine: skip adding static IP to network mk-test-preload-668578 - found existing host DHCP lease matching {name: "test-preload-668578", mac: "52:54:00:ba:cc:f4", ip: "192.168.39.242"}
	I1129 09:21:35.911641   32876 main.go:143] libmachine: reserved static IP address 192.168.39.242 for domain test-preload-668578
	I1129 09:21:35.911646   32876 main.go:143] libmachine: waiting for SSH...
	I1129 09:21:35.911651   32876 main.go:143] libmachine: Getting to WaitForSSH function...
	I1129 09:21:35.913764   32876 main.go:143] libmachine: domain test-preload-668578 has defined MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:35.914191   32876 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:cc:f4", ip: ""} in network mk-test-preload-668578: {Iface:virbr1 ExpiryTime:2025-11-29 10:20:01 +0000 UTC Type:0 Mac:52:54:00:ba:cc:f4 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-668578 Clientid:01:52:54:00:ba:cc:f4}
	I1129 09:21:35.914211   32876 main.go:143] libmachine: domain test-preload-668578 has defined IP address 192.168.39.242 and MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:35.914364   32876 main.go:143] libmachine: Using SSH client type: native
	I1129 09:21:35.914566   32876 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I1129 09:21:35.914576   32876 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1129 09:21:39.000111   32876 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.242:22: connect: no route to host
	I1129 09:21:45.080136   32876 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.242:22: connect: no route to host
	I1129 09:21:48.192499   32876 main.go:143] libmachine: SSH cmd err, output: <nil>: 
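The two "no route to host" errors followed by an empty "SSH cmd err" line are the expected shape of the WaitForSSH loop: libmachine keeps retrying a no-op exit 0 over SSH until the guest's sshd answers. An equivalent by-hand probe (key path and the docker user are taken from the sshutil lines later in this log):

    until ssh -o ConnectTimeout=3 -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/22000-5651/.minikube/machines/test-preload-668578/id_rsa \
        docker@192.168.39.242 exit 0; do sleep 2; done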
	I1129 09:21:48.195923   32876 main.go:143] libmachine: domain test-preload-668578 has defined MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:48.196341   32876 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:cc:f4", ip: ""} in network mk-test-preload-668578: {Iface:virbr1 ExpiryTime:2025-11-29 10:21:45 +0000 UTC Type:0 Mac:52:54:00:ba:cc:f4 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-668578 Clientid:01:52:54:00:ba:cc:f4}
	I1129 09:21:48.196371   32876 main.go:143] libmachine: domain test-preload-668578 has defined IP address 192.168.39.242 and MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:48.196594   32876 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/test-preload-668578/config.json ...
	I1129 09:21:48.196866   32876 machine.go:94] provisionDockerMachine start ...
	I1129 09:21:48.199251   32876 main.go:143] libmachine: domain test-preload-668578 has defined MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:48.199602   32876 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:cc:f4", ip: ""} in network mk-test-preload-668578: {Iface:virbr1 ExpiryTime:2025-11-29 10:21:45 +0000 UTC Type:0 Mac:52:54:00:ba:cc:f4 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-668578 Clientid:01:52:54:00:ba:cc:f4}
	I1129 09:21:48.199624   32876 main.go:143] libmachine: domain test-preload-668578 has defined IP address 192.168.39.242 and MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:48.199785   32876 main.go:143] libmachine: Using SSH client type: native
	I1129 09:21:48.200039   32876 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I1129 09:21:48.200050   32876 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:21:48.304475   32876 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1129 09:21:48.304515   32876 buildroot.go:166] provisioning hostname "test-preload-668578"
	I1129 09:21:48.307418   32876 main.go:143] libmachine: domain test-preload-668578 has defined MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:48.307847   32876 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:cc:f4", ip: ""} in network mk-test-preload-668578: {Iface:virbr1 ExpiryTime:2025-11-29 10:21:45 +0000 UTC Type:0 Mac:52:54:00:ba:cc:f4 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-668578 Clientid:01:52:54:00:ba:cc:f4}
	I1129 09:21:48.307874   32876 main.go:143] libmachine: domain test-preload-668578 has defined IP address 192.168.39.242 and MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:48.308035   32876 main.go:143] libmachine: Using SSH client type: native
	I1129 09:21:48.308236   32876 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I1129 09:21:48.308247   32876 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-668578 && echo "test-preload-668578" | sudo tee /etc/hostname
	I1129 09:21:48.430115   32876 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-668578
	
	I1129 09:21:48.432694   32876 main.go:143] libmachine: domain test-preload-668578 has defined MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:48.433071   32876 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:cc:f4", ip: ""} in network mk-test-preload-668578: {Iface:virbr1 ExpiryTime:2025-11-29 10:21:45 +0000 UTC Type:0 Mac:52:54:00:ba:cc:f4 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-668578 Clientid:01:52:54:00:ba:cc:f4}
	I1129 09:21:48.433099   32876 main.go:143] libmachine: domain test-preload-668578 has defined IP address 192.168.39.242 and MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:48.433246   32876 main.go:143] libmachine: Using SSH client type: native
	I1129 09:21:48.433473   32876 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I1129 09:21:48.433496   32876 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-668578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-668578/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-668578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:21:48.546966   32876 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:21:48.547022   32876 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5651/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5651/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5651/.minikube}
	I1129 09:21:48.547049   32876 buildroot.go:174] setting up certificates
	I1129 09:21:48.547061   32876 provision.go:84] configureAuth start
	I1129 09:21:48.549983   32876 main.go:143] libmachine: domain test-preload-668578 has defined MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:48.550306   32876 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:cc:f4", ip: ""} in network mk-test-preload-668578: {Iface:virbr1 ExpiryTime:2025-11-29 10:21:45 +0000 UTC Type:0 Mac:52:54:00:ba:cc:f4 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-668578 Clientid:01:52:54:00:ba:cc:f4}
	I1129 09:21:48.550339   32876 main.go:143] libmachine: domain test-preload-668578 has defined IP address 192.168.39.242 and MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:48.552608   32876 main.go:143] libmachine: domain test-preload-668578 has defined MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:48.552952   32876 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:cc:f4", ip: ""} in network mk-test-preload-668578: {Iface:virbr1 ExpiryTime:2025-11-29 10:21:45 +0000 UTC Type:0 Mac:52:54:00:ba:cc:f4 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-668578 Clientid:01:52:54:00:ba:cc:f4}
	I1129 09:21:48.552982   32876 main.go:143] libmachine: domain test-preload-668578 has defined IP address 192.168.39.242 and MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:48.553099   32876 provision.go:143] copyHostCerts
	I1129 09:21:48.553149   32876 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5651/.minikube/ca.pem, removing ...
	I1129 09:21:48.553159   32876 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5651/.minikube/ca.pem
	I1129 09:21:48.553222   32876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5651/.minikube/ca.pem (1082 bytes)
	I1129 09:21:48.553306   32876 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5651/.minikube/cert.pem, removing ...
	I1129 09:21:48.553313   32876 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5651/.minikube/cert.pem
	I1129 09:21:48.553339   32876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5651/.minikube/cert.pem (1123 bytes)
	I1129 09:21:48.553396   32876 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5651/.minikube/key.pem, removing ...
	I1129 09:21:48.553403   32876 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5651/.minikube/key.pem
	I1129 09:21:48.553425   32876 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5651/.minikube/key.pem (1679 bytes)
	I1129 09:21:48.553469   32876 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5651/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca-key.pem org=jenkins.test-preload-668578 san=[127.0.0.1 192.168.39.242 localhost minikube test-preload-668578]
	I1129 09:21:48.589095   32876 provision.go:177] copyRemoteCerts
	I1129 09:21:48.589150   32876 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:21:48.591556   32876 main.go:143] libmachine: domain test-preload-668578 has defined MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:48.591865   32876 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:cc:f4", ip: ""} in network mk-test-preload-668578: {Iface:virbr1 ExpiryTime:2025-11-29 10:21:45 +0000 UTC Type:0 Mac:52:54:00:ba:cc:f4 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-668578 Clientid:01:52:54:00:ba:cc:f4}
	I1129 09:21:48.591887   32876 main.go:143] libmachine: domain test-preload-668578 has defined IP address 192.168.39.242 and MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:48.591994   32876 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/test-preload-668578/id_rsa Username:docker}
	I1129 09:21:48.675145   32876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1129 09:21:48.703993   32876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 09:21:48.733141   32876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 09:21:48.762016   32876 provision.go:87] duration metric: took 214.941762ms to configureAuth
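configureAuth regenerated the machine server certificate with the SANs listed above and copied it into /etc/docker in the guest (the three scp lines just above). A quick way to inspect what landed there, assuming OpenSSL 1.1.1+ inside the guest for the -ext flag:

    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName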
	I1129 09:21:48.762044   32876 buildroot.go:189] setting minikube options for container-runtime
	I1129 09:21:48.762238   32876 config.go:182] Loaded profile config "test-preload-668578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:21:48.765173   32876 main.go:143] libmachine: domain test-preload-668578 has defined MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:48.765629   32876 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:cc:f4", ip: ""} in network mk-test-preload-668578: {Iface:virbr1 ExpiryTime:2025-11-29 10:21:45 +0000 UTC Type:0 Mac:52:54:00:ba:cc:f4 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-668578 Clientid:01:52:54:00:ba:cc:f4}
	I1129 09:21:48.765653   32876 main.go:143] libmachine: domain test-preload-668578 has defined IP address 192.168.39.242 and MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:48.765886   32876 main.go:143] libmachine: Using SSH client type: native
	I1129 09:21:48.766087   32876 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I1129 09:21:48.766103   32876 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:21:49.003714   32876 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:21:49.003734   32876 machine.go:97] duration metric: took 806.855001ms to provisionDockerMachine
	I1129 09:21:49.003748   32876 start.go:293] postStartSetup for "test-preload-668578" (driver="kvm2")
	I1129 09:21:49.003760   32876 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:21:49.003857   32876 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:21:49.006197   32876 main.go:143] libmachine: domain test-preload-668578 has defined MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:49.006610   32876 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:cc:f4", ip: ""} in network mk-test-preload-668578: {Iface:virbr1 ExpiryTime:2025-11-29 10:21:45 +0000 UTC Type:0 Mac:52:54:00:ba:cc:f4 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-668578 Clientid:01:52:54:00:ba:cc:f4}
	I1129 09:21:49.006642   32876 main.go:143] libmachine: domain test-preload-668578 has defined IP address 192.168.39.242 and MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:49.006810   32876 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/test-preload-668578/id_rsa Username:docker}
	I1129 09:21:49.095859   32876 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:21:49.100626   32876 info.go:137] Remote host: Buildroot 2025.02
	I1129 09:21:49.100650   32876 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5651/.minikube/addons for local assets ...
	I1129 09:21:49.100708   32876 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5651/.minikube/files for local assets ...
	I1129 09:21:49.100798   32876 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5651/.minikube/files/etc/ssl/certs/96132.pem -> 96132.pem in /etc/ssl/certs
	I1129 09:21:49.100910   32876 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:21:49.115957   32876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/files/etc/ssl/certs/96132.pem --> /etc/ssl/certs/96132.pem (1708 bytes)
	I1129 09:21:49.151374   32876 start.go:296] duration metric: took 147.610418ms for postStartSetup
	I1129 09:21:49.151415   32876 fix.go:56] duration metric: took 14.507732046s for fixHost
	I1129 09:21:49.154134   32876 main.go:143] libmachine: domain test-preload-668578 has defined MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:49.154498   32876 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:cc:f4", ip: ""} in network mk-test-preload-668578: {Iface:virbr1 ExpiryTime:2025-11-29 10:21:45 +0000 UTC Type:0 Mac:52:54:00:ba:cc:f4 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-668578 Clientid:01:52:54:00:ba:cc:f4}
	I1129 09:21:49.154527   32876 main.go:143] libmachine: domain test-preload-668578 has defined IP address 192.168.39.242 and MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:49.154712   32876 main.go:143] libmachine: Using SSH client type: native
	I1129 09:21:49.154960   32876 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.39.242 22 <nil> <nil>}
	I1129 09:21:49.154972   32876 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1129 09:21:49.260326   32876 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764408109.216792497
	
	I1129 09:21:49.260350   32876 fix.go:216] guest clock: 1764408109.216792497
	I1129 09:21:49.260360   32876 fix.go:229] Guest: 2025-11-29 09:21:49.216792497 +0000 UTC Remote: 2025-11-29 09:21:49.151418563 +0000 UTC m=+14.610258344 (delta=65.373934ms)
	I1129 09:21:49.260381   32876 fix.go:200] guest clock delta is within tolerance: 65.373934ms
	I1129 09:21:49.260387   32876 start.go:83] releasing machines lock for "test-preload-668578", held for 14.616713344s
	I1129 09:21:49.263138   32876 main.go:143] libmachine: domain test-preload-668578 has defined MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:49.263518   32876 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:cc:f4", ip: ""} in network mk-test-preload-668578: {Iface:virbr1 ExpiryTime:2025-11-29 10:21:45 +0000 UTC Type:0 Mac:52:54:00:ba:cc:f4 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-668578 Clientid:01:52:54:00:ba:cc:f4}
	I1129 09:21:49.263541   32876 main.go:143] libmachine: domain test-preload-668578 has defined IP address 192.168.39.242 and MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:49.264130   32876 ssh_runner.go:195] Run: cat /version.json
	I1129 09:21:49.264212   32876 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:21:49.267271   32876 main.go:143] libmachine: domain test-preload-668578 has defined MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:49.267451   32876 main.go:143] libmachine: domain test-preload-668578 has defined MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:49.267951   32876 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:cc:f4", ip: ""} in network mk-test-preload-668578: {Iface:virbr1 ExpiryTime:2025-11-29 10:21:45 +0000 UTC Type:0 Mac:52:54:00:ba:cc:f4 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-668578 Clientid:01:52:54:00:ba:cc:f4}
	I1129 09:21:49.267983   32876 main.go:143] libmachine: domain test-preload-668578 has defined IP address 192.168.39.242 and MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:49.268033   32876 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:cc:f4", ip: ""} in network mk-test-preload-668578: {Iface:virbr1 ExpiryTime:2025-11-29 10:21:45 +0000 UTC Type:0 Mac:52:54:00:ba:cc:f4 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-668578 Clientid:01:52:54:00:ba:cc:f4}
	I1129 09:21:49.268057   32876 main.go:143] libmachine: domain test-preload-668578 has defined IP address 192.168.39.242 and MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:49.268210   32876 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/test-preload-668578/id_rsa Username:docker}
	I1129 09:21:49.268381   32876 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/test-preload-668578/id_rsa Username:docker}
	I1129 09:21:49.345695   32876 ssh_runner.go:195] Run: systemctl --version
	I1129 09:21:49.380609   32876 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:21:49.531367   32876 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:21:49.538138   32876 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:21:49.538206   32876 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:21:49.557621   32876 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 09:21:49.557646   32876 start.go:496] detecting cgroup driver to use...
	I1129 09:21:49.557701   32876 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:21:49.577801   32876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:21:49.594710   32876 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:21:49.594784   32876 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:21:49.612742   32876 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:21:49.629976   32876 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:21:49.778758   32876 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:21:49.993060   32876 docker.go:234] disabling docker service ...
	I1129 09:21:49.993135   32876 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:21:50.010303   32876 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:21:50.026620   32876 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:21:50.180504   32876 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:21:50.318297   32876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:21:50.334957   32876 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:21:50.358675   32876 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:21:50.358743   32876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:21:50.371811   32876 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 09:21:50.371893   32876 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:21:50.384699   32876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:21:50.397579   32876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:21:50.410085   32876 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:21:50.423634   32876 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:21:50.437485   32876 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:21:50.458844   32876 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
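Each sed run above flips a single knob in CRI-O's drop-in config rather than templating the whole file. The outcome of the editing pass can be confirmed with one grep against the same path the commands target:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf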
	I1129 09:21:50.471866   32876 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:21:50.483457   32876 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1129 09:21:50.483533   32876 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1129 09:21:50.504536   32876 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
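The sysctl failure above is expected on a fresh boot: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, so minikube probes the key and falls back to modprobe. The probe-then-load pattern in a single line:

    sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1 || sudo modprobe br_netfilter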
	I1129 09:21:50.517109   32876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:21:50.662792   32876 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 09:21:50.777776   32876 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:21:50.777901   32876 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:21:50.782927   32876 start.go:564] Will wait 60s for crictl version
	I1129 09:21:50.782977   32876 ssh_runner.go:195] Run: which crictl
	I1129 09:21:50.787224   32876 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1129 09:21:50.821211   32876 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1129 09:21:50.821301   32876 ssh_runner.go:195] Run: crio --version
	I1129 09:21:50.852323   32876 ssh_runner.go:195] Run: crio --version
	I1129 09:21:50.884627   32876 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1129 09:21:50.888513   32876 main.go:143] libmachine: domain test-preload-668578 has defined MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:50.888873   32876 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:cc:f4", ip: ""} in network mk-test-preload-668578: {Iface:virbr1 ExpiryTime:2025-11-29 10:21:45 +0000 UTC Type:0 Mac:52:54:00:ba:cc:f4 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-668578 Clientid:01:52:54:00:ba:cc:f4}
	I1129 09:21:50.888900   32876 main.go:143] libmachine: domain test-preload-668578 has defined IP address 192.168.39.242 and MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:21:50.889071   32876 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1129 09:21:50.893374   32876 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:21:50.907976   32876 kubeadm.go:884] updating cluster {Name:test-preload-668578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:test-preload-668578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:21:50.908153   32876 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:21:50.908216   32876 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:21:50.942925   32876 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1129 09:21:50.943011   32876 ssh_runner.go:195] Run: which lz4
	I1129 09:21:50.947340   32876 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1129 09:21:50.952135   32876 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1129 09:21:50.952167   32876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1129 09:21:52.408435   32876 crio.go:462] duration metric: took 1.461124808s to copy over tarball
	I1129 09:21:52.408555   32876 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1129 09:21:54.017613   32876 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.609029455s)
	I1129 09:21:54.017638   32876 crio.go:469] duration metric: took 1.609161982s to extract the tarball
	I1129 09:21:54.017645   32876 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1129 09:21:54.058132   32876 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:21:54.097786   32876 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:21:54.097810   32876 cache_images.go:86] Images are preloaded, skipping loading
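Worth relating to the failure at the top of this section: at 09:21:50 crictl reported the kube images missing ("assuming images are not preloaded"), so the tarball was re-extracted over /var, and only then do the preloaded images show up. One plausible reading, not proven by this log alone, is that this restore path also replaces the image store that held the busybox image pulled before the stop. A hedged in-VM check for that theory:

    sudo crictl images --output json | grep -q busybox || echo "busybox absent after preload restore"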
	I1129 09:21:54.097817   32876 kubeadm.go:935] updating node { 192.168.39.242 8443 v1.34.1 crio true true} ...
	I1129 09:21:54.097933   32876 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-668578 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.242
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:test-preload-668578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:21:54.097995   32876 ssh_runner.go:195] Run: crio config
	I1129 09:21:54.142712   32876 cni.go:84] Creating CNI manager for ""
	I1129 09:21:54.142734   32876 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1129 09:21:54.142749   32876 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:21:54.142775   32876 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.242 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-668578 NodeName:test-preload-668578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.242"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.242 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:21:54.142911   32876 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.242
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-668578"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.242"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.242"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
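The generated config above is written to /var/tmp/minikube/kubeadm.yaml.new (the scp a few lines below) before kubeadm consumes it. A sketch of exercising such a file in isolation, using the binaries path from this log; the --ignore-preflight-errors flag is an assumption, not shown in this excerpt:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=all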
	
	I1129 09:21:54.142974   32876 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:21:54.155140   32876 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:21:54.155208   32876 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:21:54.166601   32876 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1129 09:21:54.186507   32876 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:21:54.206969   32876 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1129 09:21:54.227227   32876 ssh_runner.go:195] Run: grep 192.168.39.242	control-plane.minikube.internal$ /etc/hosts
	I1129 09:21:54.231454   32876 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.242	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
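
	The bash one-liner above makes the /etc/hosts entry idempotent: strip any existing line for control-plane.minikube.internal, then append the current IP. The same pattern in Go, as a sketch (the real command runs over ssh inside the guest):

    // hosts_entry_sketch.go: drop any stale tab-separated entry for the
    // host name (mirroring the grep -v), then append the current IP.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func upsertHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        var kept []string
        for _, line := range lines {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop the stale entry
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := upsertHostsEntry("/etc/hosts", "192.168.39.242",
            "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }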
	I1129 09:21:54.246288   32876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:21:54.394058   32876 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:21:54.424332   32876 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/test-preload-668578 for IP: 192.168.39.242
	I1129 09:21:54.424356   32876 certs.go:195] generating shared ca certs ...
	I1129 09:21:54.424371   32876 certs.go:227] acquiring lock for ca certs: {Name:mk263acc791d5a2c77504c81548ce554781ff9eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:21:54.424540   32876 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5651/.minikube/ca.key
	I1129 09:21:54.424600   32876 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5651/.minikube/proxy-client-ca.key
	I1129 09:21:54.424612   32876 certs.go:257] generating profile certs ...
	I1129 09:21:54.424733   32876 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/test-preload-668578/client.key
	I1129 09:21:54.424823   32876 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/test-preload-668578/apiserver.key.10625ba7
	I1129 09:21:54.424945   32876 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/test-preload-668578/proxy-client.key
	I1129 09:21:54.425101   32876 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/9613.pem (1338 bytes)
	W1129 09:21:54.425147   32876 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5651/.minikube/certs/9613_empty.pem, impossibly tiny 0 bytes
	I1129 09:21:54.425161   32876 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:21:54.425305   32876 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem (1082 bytes)
	I1129 09:21:54.425363   32876 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:21:54.425398   32876 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/key.pem (1679 bytes)
	I1129 09:21:54.425500   32876 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/files/etc/ssl/certs/96132.pem (1708 bytes)
	I1129 09:21:54.426263   32876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:21:54.461815   32876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:21:54.494705   32876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:21:54.524067   32876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 09:21:54.554099   32876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/test-preload-668578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1129 09:21:54.585066   32876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/test-preload-668578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:21:54.617597   32876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/test-preload-668578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:21:54.648921   32876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/test-preload-668578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:21:54.682273   32876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/files/etc/ssl/certs/96132.pem --> /usr/share/ca-certificates/96132.pem (1708 bytes)
	I1129 09:21:54.712529   32876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:21:54.740594   32876 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/certs/9613.pem --> /usr/share/ca-certificates/9613.pem (1338 bytes)
	I1129 09:21:54.768642   32876 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:21:54.787716   32876 ssh_runner.go:195] Run: openssl version
	I1129 09:21:54.794216   32876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96132.pem && ln -fs /usr/share/ca-certificates/96132.pem /etc/ssl/certs/96132.pem"
	I1129 09:21:54.806814   32876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96132.pem
	I1129 09:21:54.811706   32876 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:36 /usr/share/ca-certificates/96132.pem
	I1129 09:21:54.811757   32876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96132.pem
	I1129 09:21:54.818999   32876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96132.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:21:54.832319   32876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:21:54.846115   32876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:21:54.851631   32876 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:21:54.851705   32876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:21:54.859085   32876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:21:54.872544   32876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9613.pem && ln -fs /usr/share/ca-certificates/9613.pem /etc/ssl/certs/9613.pem"
	I1129 09:21:54.885340   32876 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9613.pem
	I1129 09:21:54.890067   32876 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:36 /usr/share/ca-certificates/9613.pem
	I1129 09:21:54.890116   32876 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9613.pem
	I1129 09:21:54.896852   32876 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9613.pem /etc/ssl/certs/51391683.0"
	I1129 09:21:54.909441   32876 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:21:54.914640   32876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 09:21:54.921793   32876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 09:21:54.928663   32876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 09:21:54.935866   32876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 09:21:54.942750   32876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 09:21:54.949933   32876 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
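
	Each "openssl x509 -checkend 86400" call above asks whether a certificate expires within the next 24 hours (86400 seconds). A Go equivalent using crypto/x509, with one certificate path taken from the log:

    // checkend_sketch.go: parse a PEM certificate and report whether it
    // expires within the given window, like `openssl x509 -checkend`.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }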
	I1129 09:21:54.957158   32876 kubeadm.go:401] StartCluster: {Name:test-preload-668578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:test-preload-668578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:21:54.957269   32876 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:21:54.957317   32876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:21:54.991105   32876 cri.go:89] found id: ""
	I1129 09:21:54.991176   32876 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:21:55.005447   32876 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1129 09:21:55.005466   32876 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1129 09:21:55.005517   32876 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1129 09:21:55.019150   32876 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:21:55.019607   32876 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-668578" does not appear in /home/jenkins/minikube-integration/22000-5651/kubeconfig
	I1129 09:21:55.019705   32876 kubeconfig.go:62] /home/jenkins/minikube-integration/22000-5651/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-668578" cluster setting kubeconfig missing "test-preload-668578" context setting]
	I1129 09:21:55.019983   32876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/kubeconfig: {Name:mk06369260b11b7542906282ff812e026bce8478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:21:55.020477   32876 kapi.go:59] client config for test-preload-668578: &rest.Config{Host:"https://192.168.39.242:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22000-5651/.minikube/profiles/test-preload-668578/client.crt", KeyFile:"/home/jenkins/minikube-integration/22000-5651/.minikube/profiles/test-preload-668578/client.key", CAFile:"/home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
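
	The rest.Config dump above reduces to a handful of non-zero fields: the Host URL plus client certificate, key, and CA paths. Building the same client with client-go, as a sketch (all other fields keep their zero values, as in the log):

    // kapi_client_sketch.go: construct the client config dumped above.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        profile := "/home/jenkins/minikube-integration/22000-5651/.minikube/profiles/test-preload-668578"
        cfg := &rest.Config{
            Host: "https://192.168.39.242:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: profile + "/client.crt",
                KeyFile:  profile + "/client.key",
                CAFile:   "/home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", clientset)
    }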
	I1129 09:21:55.020922   32876 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1129 09:21:55.020936   32876 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1129 09:21:55.020941   32876 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1129 09:21:55.020946   32876 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1129 09:21:55.020949   32876 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1129 09:21:55.021278   32876 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1129 09:21:55.032686   32876 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.242
	I1129 09:21:55.032724   32876 kubeadm.go:1161] stopping kube-system containers ...
	I1129 09:21:55.032735   32876 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1129 09:21:55.032797   32876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:21:55.065315   32876 cri.go:89] found id: ""
	I1129 09:21:55.065401   32876 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1129 09:21:55.084636   32876 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:21:55.096128   32876 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:21:55.096150   32876 kubeadm.go:158] found existing configuration files:
	
	I1129 09:21:55.096202   32876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:21:55.107385   32876 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:21:55.107446   32876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:21:55.118687   32876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:21:55.128898   32876 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:21:55.128950   32876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:21:55.140255   32876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:21:55.150557   32876 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:21:55.150619   32876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:21:55.161857   32876 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:21:55.171871   32876 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:21:55.171928   32876 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
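
	The grep/rm sequence above implements "remove unless the file already points at the expected endpoint", so that kubeadm regenerates any kubeconfig that is missing or stale. A sketch of that check in Go:

    // stale_conf_sketch.go: keep a kubeconfig only if it already targets
    // the expected control-plane endpoint; otherwise delete it.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func removeIfStale(path, endpoint string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            if os.IsNotExist(err) {
                return nil // nothing to clean up, as in the log above
            }
            return err
        }
        if strings.Contains(string(data), endpoint) {
            return nil // already points at the right endpoint
        }
        return os.Remove(path)
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            if err := removeIfStale("/etc/kubernetes/"+f, endpoint); err != nil {
                fmt.Fprintln(os.Stderr, err)
            }
        }
    }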
	I1129 09:21:55.183012   32876 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:21:55.194582   32876 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1129 09:21:55.247952   32876 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1129 09:21:57.085909   32876 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.837918259s)
	I1129 09:21:57.085985   32876 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1129 09:21:57.349891   32876 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1129 09:21:57.416106   32876 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
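
	The restart path runs kubeadm init one phase at a time (certs, kubeconfig, kubelet-start, control-plane, etcd local) rather than a full init. Driving the same sequence from Go with os/exec, with binary and config paths copied from the log (a reduced sketch, not minikube's actual runner):

    // init_phases_sketch.go: run the phased kubeadm restart shown above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, args := range phases {
            // Absolute binary path stands in for the PATH prefix the log uses.
            cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubeadm",
                append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", args, err)
                os.Exit(1)
            }
        }
    }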
	I1129 09:21:57.499165   32876 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:21:57.499260   32876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:21:57.999542   32876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:21:58.500085   32876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:21:59.000035   32876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:21:59.043373   32876 api_server.go:72] duration metric: took 1.544224682s to wait for apiserver process to appear ...
	I1129 09:21:59.043400   32876 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:21:59.043423   32876 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I1129 09:22:01.353320   32876 api_server.go:279] https://192.168.39.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1129 09:22:01.353352   32876 api_server.go:103] status: https://192.168.39.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1129 09:22:01.353366   32876 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I1129 09:22:01.383898   32876 api_server.go:279] https://192.168.39.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1129 09:22:01.383928   32876 api_server.go:103] status: https://192.168.39.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1129 09:22:01.544365   32876 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I1129 09:22:01.550142   32876 api_server.go:279] https://192.168.39.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:22:01.550171   32876 api_server.go:103] status: https://192.168.39.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:22:02.044538   32876 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I1129 09:22:02.050142   32876 api_server.go:279] https://192.168.39.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:22:02.050171   32876 api_server.go:103] status: https://192.168.39.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:22:02.543813   32876 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I1129 09:22:02.561283   32876 api_server.go:279] https://192.168.39.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:22:02.561306   32876 api_server.go:103] status: https://192.168.39.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:22:03.043951   32876 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I1129 09:22:03.048589   32876 api_server.go:279] https://192.168.39.242:8443/healthz returned 200:
	ok
	I1129 09:22:03.056230   32876 api_server.go:141] control plane version: v1.34.1
	I1129 09:22:03.056272   32876 api_server.go:131] duration metric: took 4.012863536s to wait for apiserver health ...
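
	The wait above polls /healthz roughly every 500ms: the early 403s come from the anonymous probe hitting the apiserver before RBAC bootstrap completes, the 500s from post-start hooks that are still failing, and only a 200 "ok" ends the loop. A minimal Go sketch of that polling pattern (InsecureSkipVerify stands in for the CA bundle the real client loads):

    // healthz_poll_sketch.go: retry /healthz until it returns 200 "ok".
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
            Timeout: 5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                // 403 and 500 both count as "not ready yet".
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the cadence in the log
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.242:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }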
	I1129 09:22:03.056283   32876 cni.go:84] Creating CNI manager for ""
	I1129 09:22:03.056292   32876 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1129 09:22:03.058147   32876 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1129 09:22:03.059392   32876 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1129 09:22:03.072293   32876 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
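
	The 496-byte 1-k8s.conflist pushed above configures the bridge CNI that the "kvm2 driver + crio runtime" combination recommends. A sketch that writes a comparable conflist; the plugin fields here are an assumption for illustration (bridge plus host-local IPAM over the pod subnet from the kubeadm config), not the file's verbatim contents:

    // cni_conflist_sketch.go: write a bridge CNI conflist to the CNI
    // config directory. The JSON body is hypothetical.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "k8s",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
            panic(err)
        }
    }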
	I1129 09:22:03.095323   32876 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:22:03.101660   32876 system_pods.go:59] 7 kube-system pods found
	I1129 09:22:03.101717   32876 system_pods.go:61] "coredns-66bc5c9577-t64gx" [42ed60ce-029d-48c0-88a6-4578d1d051d6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:22:03.101730   32876 system_pods.go:61] "etcd-test-preload-668578" [c2d07ecc-6ac0-428e-85d5-910c35df7156] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:22:03.101743   32876 system_pods.go:61] "kube-apiserver-test-preload-668578" [8c3c124f-18d8-4fd3-a7b0-59a5b4c07f60] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:22:03.101752   32876 system_pods.go:61] "kube-controller-manager-test-preload-668578" [bfab17ac-aaf0-4dea-bfad-88dbafaeb13b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:22:03.101761   32876 system_pods.go:61] "kube-proxy-c5ns7" [3637234c-ace6-45c0-8852-b34bfe80b9a2] Running
	I1129 09:22:03.101772   32876 system_pods.go:61] "kube-scheduler-test-preload-668578" [32b203a7-efdc-4b9e-b56b-b31f89b49bc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:22:03.101777   32876 system_pods.go:61] "storage-provisioner" [a6e567cb-063f-4754-a476-dec29c7dafc7] Running
	I1129 09:22:03.101787   32876 system_pods.go:74] duration metric: took 6.436303ms to wait for pod list to return data ...
	I1129 09:22:03.101797   32876 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:22:03.106480   32876 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1129 09:22:03.106528   32876 node_conditions.go:123] node cpu capacity is 2
	I1129 09:22:03.106540   32876 node_conditions.go:105] duration metric: took 4.737641ms to run NodePressure ...
	I1129 09:22:03.106596   32876 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1129 09:22:03.369439   32876 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1129 09:22:03.373017   32876 kubeadm.go:744] kubelet initialised
	I1129 09:22:03.373039   32876 kubeadm.go:745] duration metric: took 3.56894ms waiting for restarted kubelet to initialise ...
	I1129 09:22:03.373055   32876 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:22:03.397273   32876 ops.go:34] apiserver oom_adj: -16
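
	The oom_adj probe above finds the kube-apiserver PID and reads its legacy OOM score; -16 means the kernel's OOM killer strongly avoids the process. The same probe in Go, reusing the pgrep pattern from the log:

    // oom_adj_sketch.go: read /proc/<pid>/oom_adj for the apiserver.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
            os.Exit(1)
        }
        pid := strings.TrimSpace(string(out))
        score, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("apiserver oom_adj: %s", score)
    }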
	I1129 09:22:03.397293   32876 kubeadm.go:602] duration metric: took 8.391821477s to restartPrimaryControlPlane
	I1129 09:22:03.397302   32876 kubeadm.go:403] duration metric: took 8.440157716s to StartCluster
	I1129 09:22:03.397317   32876 settings.go:142] acquiring lock: {Name:mkb0bfd7d63d07772bc8411985c986880254a5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:22:03.397404   32876 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-5651/kubeconfig
	I1129 09:22:03.398063   32876 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/kubeconfig: {Name:mk06369260b11b7542906282ff812e026bce8478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:22:03.398333   32876 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.242 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:22:03.398410   32876 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:22:03.398497   32876 addons.go:70] Setting storage-provisioner=true in profile "test-preload-668578"
	I1129 09:22:03.398515   32876 addons.go:239] Setting addon storage-provisioner=true in "test-preload-668578"
	W1129 09:22:03.398524   32876 addons.go:248] addon storage-provisioner should already be in state true
	I1129 09:22:03.398523   32876 addons.go:70] Setting default-storageclass=true in profile "test-preload-668578"
	I1129 09:22:03.398551   32876 host.go:66] Checking if "test-preload-668578" exists ...
	I1129 09:22:03.398551   32876 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-668578"
	I1129 09:22:03.398603   32876 config.go:182] Loaded profile config "test-preload-668578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:22:03.400152   32876 out.go:179] * Verifying Kubernetes components...
	I1129 09:22:03.401228   32876 kapi.go:59] client config for test-preload-668578: &rest.Config{Host:"https://192.168.39.242:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22000-5651/.minikube/profiles/test-preload-668578/client.crt", KeyFile:"/home/jenkins/minikube-integration/22000-5651/.minikube/profiles/test-preload-668578/client.key", CAFile:"/home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1129 09:22:03.401456   32876 addons.go:239] Setting addon default-storageclass=true in "test-preload-668578"
	W1129 09:22:03.401470   32876 addons.go:248] addon default-storageclass should already be in state true
	I1129 09:22:03.401493   32876 host.go:66] Checking if "test-preload-668578" exists ...
	I1129 09:22:03.401511   32876 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1129 09:22:03.401557   32876 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:22:03.402945   32876 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:22:03.402972   32876 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1129 09:22:03.403199   32876 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1129 09:22:03.403219   32876 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1129 09:22:03.406281   32876 main.go:143] libmachine: domain test-preload-668578 has defined MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:22:03.406434   32876 main.go:143] libmachine: domain test-preload-668578 has defined MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:22:03.406783   32876 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:cc:f4", ip: ""} in network mk-test-preload-668578: {Iface:virbr1 ExpiryTime:2025-11-29 10:21:45 +0000 UTC Type:0 Mac:52:54:00:ba:cc:f4 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-668578 Clientid:01:52:54:00:ba:cc:f4}
	I1129 09:22:03.406822   32876 main.go:143] libmachine: domain test-preload-668578 has defined IP address 192.168.39.242 and MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:22:03.406840   32876 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ba:cc:f4", ip: ""} in network mk-test-preload-668578: {Iface:virbr1 ExpiryTime:2025-11-29 10:21:45 +0000 UTC Type:0 Mac:52:54:00:ba:cc:f4 Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:test-preload-668578 Clientid:01:52:54:00:ba:cc:f4}
	I1129 09:22:03.406963   32876 main.go:143] libmachine: domain test-preload-668578 has defined IP address 192.168.39.242 and MAC address 52:54:00:ba:cc:f4 in network mk-test-preload-668578
	I1129 09:22:03.407019   32876 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/test-preload-668578/id_rsa Username:docker}
	I1129 09:22:03.407256   32876 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/test-preload-668578/id_rsa Username:docker}
	I1129 09:22:03.641328   32876 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:22:03.660817   32876 node_ready.go:35] waiting up to 6m0s for node "test-preload-668578" to be "Ready" ...
	I1129 09:22:03.665787   32876 node_ready.go:49] node "test-preload-668578" is "Ready"
	I1129 09:22:03.665841   32876 node_ready.go:38] duration metric: took 4.95299ms for node "test-preload-668578" to be "Ready" ...
	I1129 09:22:03.665861   32876 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:22:03.665926   32876 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:22:03.687877   32876 api_server.go:72] duration metric: took 289.50259ms to wait for apiserver process to appear ...
	I1129 09:22:03.687916   32876 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:22:03.687944   32876 api_server.go:253] Checking apiserver healthz at https://192.168.39.242:8443/healthz ...
	I1129 09:22:03.695524   32876 api_server.go:279] https://192.168.39.242:8443/healthz returned 200:
	ok
	I1129 09:22:03.696727   32876 api_server.go:141] control plane version: v1.34.1
	I1129 09:22:03.696757   32876 api_server.go:131] duration metric: took 8.832022ms to wait for apiserver health ...
	I1129 09:22:03.696769   32876 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:22:03.700416   32876 system_pods.go:59] 7 kube-system pods found
	I1129 09:22:03.700462   32876 system_pods.go:61] "coredns-66bc5c9577-t64gx" [42ed60ce-029d-48c0-88a6-4578d1d051d6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:22:03.700469   32876 system_pods.go:61] "etcd-test-preload-668578" [c2d07ecc-6ac0-428e-85d5-910c35df7156] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:22:03.700476   32876 system_pods.go:61] "kube-apiserver-test-preload-668578" [8c3c124f-18d8-4fd3-a7b0-59a5b4c07f60] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:22:03.700483   32876 system_pods.go:61] "kube-controller-manager-test-preload-668578" [bfab17ac-aaf0-4dea-bfad-88dbafaeb13b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:22:03.700487   32876 system_pods.go:61] "kube-proxy-c5ns7" [3637234c-ace6-45c0-8852-b34bfe80b9a2] Running
	I1129 09:22:03.700495   32876 system_pods.go:61] "kube-scheduler-test-preload-668578" [32b203a7-efdc-4b9e-b56b-b31f89b49bc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:22:03.700501   32876 system_pods.go:61] "storage-provisioner" [a6e567cb-063f-4754-a476-dec29c7dafc7] Running
	I1129 09:22:03.700509   32876 system_pods.go:74] duration metric: took 3.733187ms to wait for pod list to return data ...
	I1129 09:22:03.700519   32876 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:22:03.702931   32876 default_sa.go:45] found service account: "default"
	I1129 09:22:03.702954   32876 default_sa.go:55] duration metric: took 2.427333ms for default service account to be created ...
	I1129 09:22:03.702964   32876 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:22:03.706922   32876 system_pods.go:86] 7 kube-system pods found
	I1129 09:22:03.706960   32876 system_pods.go:89] "coredns-66bc5c9577-t64gx" [42ed60ce-029d-48c0-88a6-4578d1d051d6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:22:03.706972   32876 system_pods.go:89] "etcd-test-preload-668578" [c2d07ecc-6ac0-428e-85d5-910c35df7156] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:22:03.706984   32876 system_pods.go:89] "kube-apiserver-test-preload-668578" [8c3c124f-18d8-4fd3-a7b0-59a5b4c07f60] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:22:03.706994   32876 system_pods.go:89] "kube-controller-manager-test-preload-668578" [bfab17ac-aaf0-4dea-bfad-88dbafaeb13b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:22:03.707003   32876 system_pods.go:89] "kube-proxy-c5ns7" [3637234c-ace6-45c0-8852-b34bfe80b9a2] Running
	I1129 09:22:03.707012   32876 system_pods.go:89] "kube-scheduler-test-preload-668578" [32b203a7-efdc-4b9e-b56b-b31f89b49bc9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:22:03.707023   32876 system_pods.go:89] "storage-provisioner" [a6e567cb-063f-4754-a476-dec29c7dafc7] Running
	I1129 09:22:03.707033   32876 system_pods.go:126] duration metric: took 4.062087ms to wait for k8s-apps to be running ...
	I1129 09:22:03.707041   32876 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:22:03.707093   32876 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:22:03.737474   32876 system_svc.go:56] duration metric: took 30.420144ms WaitForService to wait for kubelet
	I1129 09:22:03.737514   32876 kubeadm.go:587] duration metric: took 339.146595ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:22:03.737541   32876 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:22:03.740993   32876 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1129 09:22:03.741016   32876 node_conditions.go:123] node cpu capacity is 2
	I1129 09:22:03.741026   32876 node_conditions.go:105] duration metric: took 3.478758ms to run NodePressure ...
	I1129 09:22:03.741038   32876 start.go:242] waiting for startup goroutines ...
	I1129 09:22:03.750612   32876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1129 09:22:03.750948   32876 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1129 09:22:04.394940   32876 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1129 09:22:04.396222   32876 addons.go:530] duration metric: took 997.812258ms for enable addons: enabled=[default-storageclass storage-provisioner]
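
	Addon enablement above is just two kubectl apply calls executed in the guest against the in-guest kubeconfig. A Go sketch of the equivalent invocation:

    // addons_apply_sketch.go: apply the two addon manifests with the
    // cluster's own kubectl, mirroring the sudo commands in the log.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/storageclass.yaml",
            "/etc/kubernetes/addons/storage-provisioner.yaml",
        }
        for _, m := range manifests {
            cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", "apply", "-f", m)
            cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
            if out, err := cmd.CombinedOutput(); err != nil {
                fmt.Fprintf(os.Stderr, "apply %s: %v\n%s", m, err, out)
                os.Exit(1)
            }
        }
    }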
	I1129 09:22:04.396265   32876 start.go:247] waiting for cluster config update ...
	I1129 09:22:04.396281   32876 start.go:256] writing updated cluster config ...
	I1129 09:22:04.396529   32876 ssh_runner.go:195] Run: rm -f paused
	I1129 09:22:04.401749   32876 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:22:04.402179   32876 kapi.go:59] client config for test-preload-668578: &rest.Config{Host:"https://192.168.39.242:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22000-5651/.minikube/profiles/test-preload-668578/client.crt", KeyFile:"/home/jenkins/minikube-integration/22000-5651/.minikube/profiles/test-preload-668578/client.key", CAFile:"/home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1129 09:22:04.405068   32876 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-t64gx" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 09:22:06.411129   32876 pod_ready.go:104] pod "coredns-66bc5c9577-t64gx" is not "Ready", error: <nil>
	W1129 09:22:08.411536   32876 pod_ready.go:104] pod "coredns-66bc5c9577-t64gx" is not "Ready", error: <nil>
	W1129 09:22:10.911894   32876 pod_ready.go:104] pod "coredns-66bc5c9577-t64gx" is not "Ready", error: <nil>
	I1129 09:22:13.412746   32876 pod_ready.go:94] pod "coredns-66bc5c9577-t64gx" is "Ready"
	I1129 09:22:13.412779   32876 pod_ready.go:86] duration metric: took 9.007688806s for pod "coredns-66bc5c9577-t64gx" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:22:13.417782   32876 pod_ready.go:83] waiting for pod "etcd-test-preload-668578" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:22:13.431498   32876 pod_ready.go:94] pod "etcd-test-preload-668578" is "Ready"
	I1129 09:22:13.431528   32876 pod_ready.go:86] duration metric: took 13.718579ms for pod "etcd-test-preload-668578" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:22:13.461739   32876 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-668578" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:22:13.467388   32876 pod_ready.go:94] pod "kube-apiserver-test-preload-668578" is "Ready"
	I1129 09:22:13.467414   32876 pod_ready.go:86] duration metric: took 5.643532ms for pod "kube-apiserver-test-preload-668578" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:22:13.517670   32876 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-668578" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 09:22:15.524606   32876 pod_ready.go:104] pod "kube-controller-manager-test-preload-668578" is not "Ready", error: <nil>
	I1129 09:22:17.024476   32876 pod_ready.go:94] pod "kube-controller-manager-test-preload-668578" is "Ready"
	I1129 09:22:17.024500   32876 pod_ready.go:86] duration metric: took 3.506788016s for pod "kube-controller-manager-test-preload-668578" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:22:17.026366   32876 pod_ready.go:83] waiting for pod "kube-proxy-c5ns7" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:22:17.030605   32876 pod_ready.go:94] pod "kube-proxy-c5ns7" is "Ready"
	I1129 09:22:17.030626   32876 pod_ready.go:86] duration metric: took 4.234159ms for pod "kube-proxy-c5ns7" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:22:17.209552   32876 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-668578" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:22:17.609350   32876 pod_ready.go:94] pod "kube-scheduler-test-preload-668578" is "Ready"
	I1129 09:22:17.609375   32876 pod_ready.go:86] duration metric: took 399.795268ms for pod "kube-scheduler-test-preload-668578" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:22:17.609387   32876 pod_ready.go:40] duration metric: took 13.207613107s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
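
	Each pod_ready wait above polls one named pod until its PodReady condition is True, or the pod disappears. A client-go sketch of that helper, reusing the cert paths from the earlier rest.Config dump:

    // pod_ready_sketch.go: wait for a pod to be Ready or gone.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func waitPodReadyOrGone(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return nil // the pod being gone also ends the wait
            }
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s never became Ready", ns, name)
    }

    func main() {
        profile := "/home/jenkins/minikube-integration/22000-5651/.minikube/profiles/test-preload-668578"
        cfg := &rest.Config{
            Host: "https://192.168.39.242:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: profile + "/client.crt",
                KeyFile:  profile + "/client.key",
                CAFile:   "/home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt",
            },
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitPodReadyOrGone(cs, "kube-system", "coredns-66bc5c9577-t64gx", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }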
	I1129 09:22:17.653952   32876 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:22:17.656645   32876 out.go:179] * Done! kubectl is now configured to use "test-preload-668578" cluster and "default" namespace by default
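
	The pod_ready.go lines above poll each kube-system pod's Ready condition on a roughly 2s cadence until it is true or the pod is gone. A minimal client-go sketch of that style of check (the function name waitPodReady is illustrative, not minikube's actual API; assumes a reachable kubeconfig):

	// podready_sketch.go - a minimal sketch of polling a pod's Ready condition,
	// in the spirit of the pod_ready.go loop above. Illustrative only.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls until the pod reports Ready=True or no longer exists.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return nil // the "or be gone" case in the log above
			}
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(2 * time.Second): // matches the ~2s cadence in the log
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		fmt.Println(waitPodReady(ctx, cs, "kube-system", "coredns-66bc5c9577-t64gx"))
	}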
	
	
	==> CRI-O <==
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.385716838Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764408138385644020,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135214,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aad2ee4d-a626-4460-a714-937f60da8e90 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.386609960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52e34bb5-2d0b-4958-a1c8-c739dc720958 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.386675119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52e34bb5-2d0b-4958-a1c8-c739dc720958 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.386862791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1eddee28895edc7916d9abb0c50bcb733c4a3b2c6886fcac8e29742d93492508,PodSandboxId:9a4cab496f0e8660c072461574424bec20e3382d1c2539395889ba516e2dd3ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764408125515555270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-t64gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42ed60ce-029d-48c0-88a6-4578d1d051d6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c073ddb6fc49a34210eb09fdaad02aed504c41e528afad8d23c0fd170e5020,PodSandboxId:30e615bba5a8ef56af51e3a28eff8ade68c9c85e13bdba9cd1f9cf8ed07e6b52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764408121962065257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6e567cb-063f-4754-a476-dec29c7dafc7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e330b8672a2622f2b95ca71482e85616dbfa9e2e493ded9f954f30932229f0,PodSandboxId:56bfe24ddda82acb14ecc02335559164b4c47998fd616921be8cdc10bcd6f856,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764408121946959407,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c5ns7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3637234c-ace6-45c0-8852-b34bfe80b9a2,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5330700ad538cffb8fdeed925196b88f01150a999a96f69c8dcc0bc5fd63660,PodSandboxId:c44c509180c357465baf0b1e76c1738146bfe91ea9756445a0645173b7a03e6f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764408118484391103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-668578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d0fe11d7260e0dd0f2b67f4eac92842,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7acbb7b94f0b8f3f675210c46c62c73e3e799d3aadecc696b2943eee0b3c27e2,PodSandboxId:6110c1e76b4baba64a4e7ed6a6f7725bf8a62eecdf51b6385c5fc76df806f750,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1764408118435474366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-668578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2bb3c2707f80311e2b7a5b67e5506f2,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0984fb1168801b0bfef603467267f7ce56fc78cd071928ffab5bddbd6f6f74fa,PodSandboxId:b497016f3d3b531593ee18f07b293caca86efd7f6a1c7d0f042bdbc4eec250b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764408118395047806,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-668578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ae2194b859376b1e72dd0401df2096,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d97bd5149503f30e71e0fe120307f1b12543eb7282fc4a40505015a1d3383b,PodSandboxId:41cd7485231b34d89d63dfea22a5fa068220607f163c5a3215462362bf3b6c04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764408118379122106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-668578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce448c78644891e70e26b58fd2ede79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=52e34bb5-2d0b-4958-a1c8-c739dc720958 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.422718903Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54959326-ffc5-4d13-9ae4-157ad2fd4e40 name=/runtime.v1.RuntimeService/Version
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.422794828Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54959326-ffc5-4d13-9ae4-157ad2fd4e40 name=/runtime.v1.RuntimeService/Version
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.423914418Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4f3618f2-e2bc-4b97-b7f8-c8ef4032e646 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.424345794Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764408138424321828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135214,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4f3618f2-e2bc-4b97-b7f8-c8ef4032e646 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.425201667Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83d748db-bf2d-47c0-8b8f-581b3568b7f0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.425306978Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83d748db-bf2d-47c0-8b8f-581b3568b7f0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.425483594Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1eddee28895edc7916d9abb0c50bcb733c4a3b2c6886fcac8e29742d93492508,PodSandboxId:9a4cab496f0e8660c072461574424bec20e3382d1c2539395889ba516e2dd3ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764408125515555270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-t64gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42ed60ce-029d-48c0-88a6-4578d1d051d6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c073ddb6fc49a34210eb09fdaad02aed504c41e528afad8d23c0fd170e5020,PodSandboxId:30e615bba5a8ef56af51e3a28eff8ade68c9c85e13bdba9cd1f9cf8ed07e6b52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764408121962065257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6e567cb-063f-4754-a476-dec29c7dafc7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e330b8672a2622f2b95ca71482e85616dbfa9e2e493ded9f954f30932229f0,PodSandboxId:56bfe24ddda82acb14ecc02335559164b4c47998fd616921be8cdc10bcd6f856,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764408121946959407,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c5ns7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3637234c-ace6-45c0-8852-b34bfe80b9a2,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5330700ad538cffb8fdeed925196b88f01150a999a96f69c8dcc0bc5fd63660,PodSandboxId:c44c509180c357465baf0b1e76c1738146bfe91ea9756445a0645173b7a03e6f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764408118484391103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-668578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d0fe11d7260e0dd0f2b67f4eac92842,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7acbb7b94f0b8f3f675210c46c62c73e3e799d3aadecc696b2943eee0b3c27e2,PodSandboxId:6110c1e76b4baba64a4e7ed6a6f7725bf8a62eecdf51b6385c5fc76df806f750,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1764408118435474366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-668578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2bb3c2707f80311e2b7a5b67e5506f2,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0984fb1168801b0bfef603467267f7ce56fc78cd071928ffab5bddbd6f6f74fa,PodSandboxId:b497016f3d3b531593ee18f07b293caca86efd7f6a1c7d0f042bdbc4eec250b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764408118395047806,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-668578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ae2194b859376b1e72dd0401df2096,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d97bd5149503f30e71e0fe120307f1b12543eb7282fc4a40505015a1d3383b,PodSandboxId:41cd7485231b34d89d63dfea22a5fa068220607f163c5a3215462362bf3b6c04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764408118379122106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-668578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce448c78644891e70e26b58fd2ede79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83d748db-bf2d-47c0-8b8f-581b3568b7f0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.458236647Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aeb599fe-ddc2-46ea-bf45-de1f828786af name=/runtime.v1.RuntimeService/Version
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.458321783Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aeb599fe-ddc2-46ea-bf45-de1f828786af name=/runtime.v1.RuntimeService/Version
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.459769784Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3c736dc7-0891-4cf7-a256-145d2e911c9b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.460654589Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764408138460630152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135214,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c736dc7-0891-4cf7-a256-145d2e911c9b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.462065675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=570c4985-879b-44c9-8ee4-e93b0e8b6715 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.462259451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=570c4985-879b-44c9-8ee4-e93b0e8b6715 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.462429993Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1eddee28895edc7916d9abb0c50bcb733c4a3b2c6886fcac8e29742d93492508,PodSandboxId:9a4cab496f0e8660c072461574424bec20e3382d1c2539395889ba516e2dd3ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764408125515555270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-t64gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42ed60ce-029d-48c0-88a6-4578d1d051d6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c073ddb6fc49a34210eb09fdaad02aed504c41e528afad8d23c0fd170e5020,PodSandboxId:30e615bba5a8ef56af51e3a28eff8ade68c9c85e13bdba9cd1f9cf8ed07e6b52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764408121962065257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6e567cb-063f-4754-a476-dec29c7dafc7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e330b8672a2622f2b95ca71482e85616dbfa9e2e493ded9f954f30932229f0,PodSandboxId:56bfe24ddda82acb14ecc02335559164b4c47998fd616921be8cdc10bcd6f856,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764408121946959407,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c5ns7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3637234c-ace6-45c0-8852-b34bfe80b9a2,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5330700ad538cffb8fdeed925196b88f01150a999a96f69c8dcc0bc5fd63660,PodSandboxId:c44c509180c357465baf0b1e76c1738146bfe91ea9756445a0645173b7a03e6f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764408118484391103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-668578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d0fe11d7260e0dd0f2b67f4eac92842,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7acbb7b94f0b8f3f675210c46c62c73e3e799d3aadecc696b2943eee0b3c27e2,PodSandboxId:6110c1e76b4baba64a4e7ed6a6f7725bf8a62eecdf51b6385c5fc76df806f750,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1764408118435474366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-668578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2bb3c2707f80311e2b7a5b67e5506f2,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0984fb1168801b0bfef603467267f7ce56fc78cd071928ffab5bddbd6f6f74fa,PodSandboxId:b497016f3d3b531593ee18f07b293caca86efd7f6a1c7d0f042bdbc4eec250b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764408118395047806,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-668578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ae2194b859376b1e72dd0401df2096,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d97bd5149503f30e71e0fe120307f1b12543eb7282fc4a40505015a1d3383b,PodSandboxId:41cd7485231b34d89d63dfea22a5fa068220607f163c5a3215462362bf3b6c04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764408118379122106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-668578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce448c78644891e70e26b58fd2ede79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=570c4985-879b-44c9-8ee4-e93b0e8b6715 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.492195591Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d3ef9e4-8194-40fe-847a-bf77a02b056c name=/runtime.v1.RuntimeService/Version
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.492280321Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d3ef9e4-8194-40fe-847a-bf77a02b056c name=/runtime.v1.RuntimeService/Version
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.493366816Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=53bf7b12-f0bc-4966-923f-1c16d0daba81 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.494345925Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764408138494320756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:135214,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=53bf7b12-f0bc-4966-923f-1c16d0daba81 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.495292329Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f72065f0-7174-4099-b0d3-9f8a175463eb name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.495355691Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f72065f0-7174-4099-b0d3-9f8a175463eb name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:22:18 test-preload-668578 crio[831]: time="2025-11-29 09:22:18.495520459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1eddee28895edc7916d9abb0c50bcb733c4a3b2c6886fcac8e29742d93492508,PodSandboxId:9a4cab496f0e8660c072461574424bec20e3382d1c2539395889ba516e2dd3ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764408125515555270,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-t64gx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42ed60ce-029d-48c0-88a6-4578d1d051d6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2c073ddb6fc49a34210eb09fdaad02aed504c41e528afad8d23c0fd170e5020,PodSandboxId:30e615bba5a8ef56af51e3a28eff8ade68c9c85e13bdba9cd1f9cf8ed07e6b52,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1764408121962065257,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6e567cb-063f-4754-a476-dec29c7dafc7,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65e330b8672a2622f2b95ca71482e85616dbfa9e2e493ded9f954f30932229f0,PodSandboxId:56bfe24ddda82acb14ecc02335559164b4c47998fd616921be8cdc10bcd6f856,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764408121946959407,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-c5ns7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3637234c-ace6-45c0-8852-b34bfe80b9a2,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5330700ad538cffb8fdeed925196b88f01150a999a96f69c8dcc0bc5fd63660,PodSandboxId:c44c509180c357465baf0b1e76c1738146bfe91ea9756445a0645173b7a03e6f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764408118484391103,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-668578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d0fe11d7260e0dd0f2b67f4eac92842,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7acbb7b94f0b8f3f675210c46c62c73e3e799d3aadecc696b2943eee0b3c27e2,PodSandboxId:6110c1e76b4baba64a4e7ed6a6f7725bf8a62eecdf51b6385c5fc76df806f750,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1764408118435474366,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-668578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2bb3c2707f80311e2b7a5b67e5506f2,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0984fb1168801b0bfef603467267f7ce56fc78cd071928ffab5bddbd6f6f74fa,PodSandboxId:b497016f3d3b531593ee18f07b293caca86efd7f6a1c7d0f042bdbc4eec250b6,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764408118395047806,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-668578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27ae2194b859376b1e72dd0401df2096,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d97bd5149503f30e71e0fe120307f1b12543eb7282fc4a40505015a1d3383b,PodSandboxId:41cd7485231b34d89d63dfea22a5fa068220607f163c5a3215462362bf3b6c04,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764408118379122106,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-668578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bce448c78644891e70e26b58fd2ede79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f72065f0-7174-4099-b0d3-9f8a175463eb name=/runtime.v1.RuntimeService/ListContainers
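
	The Version/ImageFsInfo/ListContainers request-response triples above are periodic CRI polls of CRI-O over its unix socket (this is what the kubelet and crictl issue). A hedged Go sketch of the same ListContainers call via the published CRI API; the socket path is assumed to be CRI-O's default:

	// cri_list_sketch.go - issues the same ListContainers RPC seen in the
	// CRI-O debug log above. Illustrative only; assumes CRI-O's default socket.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter returns the full container list, exactly as the
		// "No filters were applied" debug line reports.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
		}
	}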
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	1eddee28895ed       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   1                   9a4cab496f0e8       coredns-66bc5c9577-t64gx                      kube-system
	c2c073ddb6fc4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       2                   30e615bba5a8e       storage-provisioner                           kube-system
	65e330b8672a2       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   16 seconds ago      Running             kube-proxy                1                   56bfe24ddda82       kube-proxy-c5ns7                              kube-system
	e5330700ad538       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   20 seconds ago      Running             kube-scheduler            1                   c44c509180c35       kube-scheduler-test-preload-668578            kube-system
	7acbb7b94f0b8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   20 seconds ago      Running             etcd                      1                   6110c1e76b4ba       etcd-test-preload-668578                      kube-system
	0984fb1168801       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   20 seconds ago      Running             kube-apiserver            1                   b497016f3d3b5       kube-apiserver-test-preload-668578            kube-system
	81d97bd514950       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   20 seconds ago      Running             kube-controller-manager   1                   41cd7485231b3       kube-controller-manager-test-preload-668578   kube-system
	
	
	==> coredns [1eddee28895edc7916d9abb0c50bcb733c4a3b2c6886fcac8e29742d93492508] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42549 - 5483 "HINFO IN 3298546311479397156.6747496781341124561. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062960909s
	
	
	==> describe nodes <==
	Name:               test-preload-668578
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-668578
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=test-preload-668578
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_20_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:20:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-668578
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:22:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:22:03 +0000   Sat, 29 Nov 2025 09:20:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:22:03 +0000   Sat, 29 Nov 2025 09:20:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:22:03 +0000   Sat, 29 Nov 2025 09:20:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:22:03 +0000   Sat, 29 Nov 2025 09:22:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.242
	  Hostname:    test-preload-668578
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 cf14fbf5b44c4932b6a5e28de54952ee
	  System UUID:                cf14fbf5-b44c-4932-b6a5-e28de54952ee
	  Boot ID:                    6133bdfb-335a-44e8-a026-23bd37219884
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-t64gx                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     96s
	  kube-system                 etcd-test-preload-668578                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         101s
	  kube-system                 kube-apiserver-test-preload-668578             250m (12%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-test-preload-668578    200m (10%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-c5ns7                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-scheduler-test-preload-668578             100m (5%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 95s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Normal   NodeHasSufficientMemory  101s               kubelet          Node test-preload-668578 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  101s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    101s               kubelet          Node test-preload-668578 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     101s               kubelet          Node test-preload-668578 status is now: NodeHasSufficientPID
	  Normal   Starting                 101s               kubelet          Starting kubelet.
	  Normal   NodeReady                100s               kubelet          Node test-preload-668578 status is now: NodeReady
	  Normal   RegisteredNode           97s                node-controller  Node test-preload-668578 event: Registered Node test-preload-668578 in Controller
	  Normal   Starting                 21s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node test-preload-668578 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node test-preload-668578 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node test-preload-668578 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 17s                kubelet          Node test-preload-668578 has been rebooted, boot id: 6133bdfb-335a-44e8-a026-23bd37219884
	  Normal   RegisteredNode           14s                node-controller  Node test-preload-668578 event: Registered Node test-preload-668578 in Controller
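
	In the node description above, each Allocated-resources percentage is the summed request over the node's allocatable capacity: 750m CPU against 2 full CPUs is 37.5%, which kubectl's integer math truncates to 37%. A small sketch of the same arithmetic with apimachinery's resource package (file name illustrative):

	// alloc_percent_sketch.go - reproduces the "750m (37%)" figure from the
	// node description above using resource.Quantity.
	package main

	import (
		"fmt"

		"k8s.io/apimachinery/pkg/api/resource"
	)

	func main() {
		requests := resource.MustParse("750m") // summed CPU requests of the 7 pods
		allocatable := resource.MustParse("2") // node's allocatable CPU

		// Integer division truncates 37.5 down to 37, matching kubectl describe.
		pct := requests.MilliValue() * 100 / allocatable.MilliValue()
		fmt.Printf("cpu %s (%d%%)\n", requests.String(), pct) // cpu 750m (37%)
	}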
	
	
	==> dmesg <==
	[Nov29 09:21] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001169] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003118] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.954283] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.103698] kauditd_printk_skb: 88 callbacks suppressed
	[Nov29 09:22] kauditd_printk_skb: 196 callbacks suppressed
	[  +8.060291] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [7acbb7b94f0b8f3f675210c46c62c73e3e799d3aadecc696b2943eee0b3c27e2] <==
	{"level":"warn","ts":"2025-11-29T09:22:00.303197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.326527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.338193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.349058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.360549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.377784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.385176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.398453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.412266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.428567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.440902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.455680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.469633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.491000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.504370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.514170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.524765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.539731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.561901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.574055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.588181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.600186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.619507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.639869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:22:00.714650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50260","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:22:18 up 0 min,  0 users,  load average: 0.69, 0.19, 0.06
	Linux test-preload-668578 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [0984fb1168801b0bfef603467267f7ce56fc78cd071928ffab5bddbd6f6f74fa] <==
	I1129 09:22:01.444303       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1129 09:22:01.444412       1 policy_source.go:240] refreshing policies
	I1129 09:22:01.450795       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:22:01.463175       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1129 09:22:01.463207       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1129 09:22:01.463453       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1129 09:22:01.464116       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 09:22:01.467881       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1129 09:22:01.467897       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1129 09:22:01.467913       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1129 09:22:01.467924       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1129 09:22:01.474273       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1129 09:22:01.474339       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1129 09:22:01.474756       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 09:22:01.486908       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:22:01.486196       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1129 09:22:01.491375       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1129 09:22:02.273322       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:22:03.177462       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:22:03.221348       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:22:03.264970       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:22:03.278535       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:22:04.790934       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:22:04.942287       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:22:05.041847       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [81d97bd5149503f30e71e0fe120307f1b12543eb7282fc4a40505015a1d3383b] <==
	I1129 09:22:04.737246       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1129 09:22:04.737389       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1129 09:22:04.737447       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1129 09:22:04.738680       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1129 09:22:04.738699       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1129 09:22:04.738732       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 09:22:04.740017       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1129 09:22:04.741052       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1129 09:22:04.742180       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:22:04.743437       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:22:04.746786       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1129 09:22:04.748115       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1129 09:22:04.749345       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 09:22:04.767721       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1129 09:22:04.767750       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1129 09:22:04.769364       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:22:04.776644       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1129 09:22:04.778884       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1129 09:22:04.780113       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 09:22:04.780149       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 09:22:04.784461       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1129 09:22:04.789672       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 09:22:04.789794       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 09:22:04.789865       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-668578"
	I1129 09:22:04.789913       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [65e330b8672a2622f2b95ca71482e85616dbfa9e2e493ded9f954f30932229f0] <==
	I1129 09:22:02.168378       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:22:02.269009       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:22:02.269182       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.242"]
	E1129 09:22:02.269628       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:22:02.344263       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1129 09:22:02.344323       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1129 09:22:02.344350       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:22:02.360471       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:22:02.360823       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:22:02.360849       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:22:02.365764       1 config.go:200] "Starting service config controller"
	I1129 09:22:02.365789       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:22:02.365809       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:22:02.365813       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:22:02.365852       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:22:02.365861       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:22:02.369527       1 config.go:309] "Starting node config controller"
	I1129 09:22:02.369646       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:22:02.369653       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:22:02.466342       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:22:02.466481       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1129 09:22:02.466971       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e5330700ad538cffb8fdeed925196b88f01150a999a96f69c8dcc0bc5fd63660] <==
	I1129 09:21:59.836417       1 serving.go:386] Generated self-signed cert in-memory
	W1129 09:22:01.372978       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1129 09:22:01.373012       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1129 09:22:01.373021       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1129 09:22:01.373028       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1129 09:22:01.409623       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 09:22:01.410920       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:22:01.413398       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 09:22:01.413626       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:22:01.413668       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:22:01.417317       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 09:22:01.513923       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:22:01 test-preload-668578 kubelet[1162]: I1129 09:22:01.493512    1162 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-668578"
	Nov 29 09:22:01 test-preload-668578 kubelet[1162]: E1129 09:22:01.525241    1162 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-668578\" already exists" pod="kube-system/kube-scheduler-test-preload-668578"
	Nov 29 09:22:01 test-preload-668578 kubelet[1162]: I1129 09:22:01.525286    1162 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-668578"
	Nov 29 09:22:01 test-preload-668578 kubelet[1162]: I1129 09:22:01.543002    1162 kubelet_node_status.go:124] "Node was previously registered" node="test-preload-668578"
	Nov 29 09:22:01 test-preload-668578 kubelet[1162]: I1129 09:22:01.543328    1162 kubelet_node_status.go:78] "Successfully registered node" node="test-preload-668578"
	Nov 29 09:22:01 test-preload-668578 kubelet[1162]: I1129 09:22:01.543480    1162 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 29 09:22:01 test-preload-668578 kubelet[1162]: I1129 09:22:01.544554    1162 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 09:22:01 test-preload-668578 kubelet[1162]: E1129 09:22:01.546544    1162 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-668578\" already exists" pod="kube-system/kube-apiserver-test-preload-668578"
	Nov 29 09:22:01 test-preload-668578 kubelet[1162]: I1129 09:22:01.548167    1162 setters.go:543] "Node became not ready" node="test-preload-668578" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-29T09:22:01Z","lastTransitionTime":"2025-11-29T09:22:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Nov 29 09:22:01 test-preload-668578 kubelet[1162]: I1129 09:22:01.557438    1162 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-668578"
	Nov 29 09:22:01 test-preload-668578 kubelet[1162]: I1129 09:22:01.558682    1162 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-668578"
	Nov 29 09:22:01 test-preload-668578 kubelet[1162]: I1129 09:22:01.558911    1162 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-668578"
	Nov 29 09:22:01 test-preload-668578 kubelet[1162]: E1129 09:22:01.581163    1162 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-test-preload-668578\" already exists" pod="kube-system/etcd-test-preload-668578"
	Nov 29 09:22:01 test-preload-668578 kubelet[1162]: E1129 09:22:01.597093    1162 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-668578\" already exists" pod="kube-system/kube-scheduler-test-preload-668578"
	Nov 29 09:22:01 test-preload-668578 kubelet[1162]: E1129 09:22:01.597115    1162 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-668578\" already exists" pod="kube-system/kube-apiserver-test-preload-668578"
	Nov 29 09:22:01 test-preload-668578 kubelet[1162]: E1129 09:22:01.977124    1162 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 29 09:22:01 test-preload-668578 kubelet[1162]: E1129 09:22:01.977425    1162 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/42ed60ce-029d-48c0-88a6-4578d1d051d6-config-volume podName:42ed60ce-029d-48c0-88a6-4578d1d051d6 nodeName:}" failed. No retries permitted until 2025-11-29 09:22:02.977408413 +0000 UTC m=+5.671807510 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/42ed60ce-029d-48c0-88a6-4578d1d051d6-config-volume") pod "coredns-66bc5c9577-t64gx" (UID: "42ed60ce-029d-48c0-88a6-4578d1d051d6") : object "kube-system"/"coredns" not registered
	Nov 29 09:22:02 test-preload-668578 kubelet[1162]: E1129 09:22:02.986011    1162 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 29 09:22:02 test-preload-668578 kubelet[1162]: E1129 09:22:02.986837    1162 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/42ed60ce-029d-48c0-88a6-4578d1d051d6-config-volume podName:42ed60ce-029d-48c0-88a6-4578d1d051d6 nodeName:}" failed. No retries permitted until 2025-11-29 09:22:04.986812017 +0000 UTC m=+7.681211110 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/42ed60ce-029d-48c0-88a6-4578d1d051d6-config-volume") pod "coredns-66bc5c9577-t64gx" (UID: "42ed60ce-029d-48c0-88a6-4578d1d051d6") : object "kube-system"/"coredns" not registered
	Nov 29 09:22:03 test-preload-668578 kubelet[1162]: I1129 09:22:03.240392    1162 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 29 09:22:07 test-preload-668578 kubelet[1162]: E1129 09:22:07.491530    1162 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764408127491220877  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135214}  inodes_used:{value:64}}"
	Nov 29 09:22:07 test-preload-668578 kubelet[1162]: E1129 09:22:07.491566    1162 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764408127491220877  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135214}  inodes_used:{value:64}}"
	Nov 29 09:22:13 test-preload-668578 kubelet[1162]: I1129 09:22:13.333900    1162 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 29 09:22:17 test-preload-668578 kubelet[1162]: E1129 09:22:17.493913    1162 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764408137493079803  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135214}  inodes_used:{value:64}}"
	Nov 29 09:22:17 test-preload-668578 kubelet[1162]: E1129 09:22:17.494301    1162 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764408137493079803  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:135214}  inodes_used:{value:64}}"
	
	
	==> storage-provisioner [c2c073ddb6fc49a34210eb09fdaad02aed504c41e528afad8d23c0fd170e5020] <==
	I1129 09:22:02.077098       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-668578 -n test-preload-668578
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-668578 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-668578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-668578
--- FAIL: TestPreload (153.63s)
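
Note on the post-mortem commands above: minikube status --format={{.APIServer}} (and --format={{.Host}}) takes a Go template string and renders it against minikube's status value. A minimal sketch of that mechanism, assuming an illustrative Status type rather than minikube's actual one:

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for minikube's real status type; the field names
	// here only mirror the templates used above ({{.Host}}, {{.APIServer}}).
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", APIServer: "Running"}
		// Parse the --format argument as a text/template and render it.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
		// prints: Running
	}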

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (53.07s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-893760 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-893760 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.8815562s)
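For context on the failure that follows: the test requires the second start's combined output to contain the marker string quoted below. A minimal illustrative sketch of that kind of check, using plain os/exec rather than minikube's actual test helpers:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command and flags as the test run logged above.
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "pause-893760",
			"--alsologtostderr", "-v=1", "--driver=kvm2", "--container-runtime=crio")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("second start failed: %v\n", err)
			return
		}
		const marker = "The running cluster does not require reconfiguration"
		if !strings.Contains(string(out), marker) {
			fmt.Printf("expected second start output to include %q\n", marker)
		}
	}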
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-893760] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-5651/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5651/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-893760" primary control-plane node in "pause-893760" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-893760" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:29:38.022107   40298 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:29:38.022356   40298 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:29:38.022369   40298 out.go:374] Setting ErrFile to fd 2...
	I1129 09:29:38.022376   40298 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:29:38.022695   40298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
	I1129 09:29:38.023226   40298 out.go:368] Setting JSON to false
	I1129 09:29:38.024269   40298 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4322,"bootTime":1764404256,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:29:38.024340   40298 start.go:143] virtualization: kvm guest
	I1129 09:29:38.026676   40298 out.go:179] * [pause-893760] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:29:38.028135   40298 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:29:38.028123   40298 notify.go:221] Checking for updates...
	I1129 09:29:38.030679   40298 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:29:38.032320   40298 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5651/kubeconfig
	I1129 09:29:38.033604   40298 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5651/.minikube
	I1129 09:29:38.034889   40298 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:29:38.036013   40298 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:29:38.037701   40298 config.go:182] Loaded profile config "pause-893760": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:29:38.038219   40298 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:29:38.079014   40298 out.go:179] * Using the kvm2 driver based on existing profile
	I1129 09:29:38.080037   40298 start.go:309] selected driver: kvm2
	I1129 09:29:38.080052   40298 start.go:927] validating driver "kvm2" against &{Name:pause-893760 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-893760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.104 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:29:38.080178   40298 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:29:38.081242   40298 cni.go:84] Creating CNI manager for ""
	I1129 09:29:38.081298   40298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1129 09:29:38.081347   40298 start.go:353] cluster config:
	{Name:pause-893760 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-893760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.104 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:29:38.081460   40298 iso.go:125] acquiring lock: {Name:mk0184b92a126aea44cd27d4836c247b817b0333 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:29:38.082769   40298 out.go:179] * Starting "pause-893760" primary control-plane node in "pause-893760" cluster
	I1129 09:29:38.083786   40298 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:29:38.083816   40298 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5651/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 09:29:38.083824   40298 cache.go:65] Caching tarball of preloaded images
	I1129 09:29:38.083943   40298 preload.go:238] Found /home/jenkins/minikube-integration/22000-5651/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 09:29:38.083957   40298 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:29:38.084086   40298 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/pause-893760/config.json ...
	I1129 09:29:38.084413   40298 start.go:360] acquireMachinesLock for pause-893760: {Name:mke0bd376b87e419ebada00803bbcbb9230316d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1129 09:29:40.658815   40298 start.go:364] duration metric: took 2.574346244s to acquireMachinesLock for "pause-893760"
	I1129 09:29:40.658890   40298 start.go:96] Skipping create...Using existing machine configuration
	I1129 09:29:40.658899   40298 fix.go:54] fixHost starting: 
	I1129 09:29:40.661725   40298 fix.go:112] recreateIfNeeded on pause-893760: state=Running err=<nil>
	W1129 09:29:40.661748   40298 fix.go:138] unexpected machine state, will restart: <nil>
	I1129 09:29:40.663871   40298 out.go:252] * Updating the running kvm2 "pause-893760" VM ...
	I1129 09:29:40.663906   40298 machine.go:94] provisionDockerMachine start ...
	I1129 09:29:40.668230   40298 main.go:143] libmachine: domain pause-893760 has defined MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:40.669365   40298 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dc:6a:a2", ip: ""} in network mk-pause-893760: {Iface:virbr5 ExpiryTime:2025-11-29 10:28:33 +0000 UTC Type:0 Mac:52:54:00:dc:6a:a2 Iaid: IPaddr:192.168.83.104 Prefix:24 Hostname:pause-893760 Clientid:01:52:54:00:dc:6a:a2}
	I1129 09:29:40.669406   40298 main.go:143] libmachine: domain pause-893760 has defined IP address 192.168.83.104 and MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:40.669756   40298 main.go:143] libmachine: Using SSH client type: native
	I1129 09:29:40.670067   40298 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.104 22 <nil> <nil>}
	I1129 09:29:40.670079   40298 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:29:40.791103   40298 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-893760
	
	I1129 09:29:40.791150   40298 buildroot.go:166] provisioning hostname "pause-893760"
	I1129 09:29:40.794710   40298 main.go:143] libmachine: domain pause-893760 has defined MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:40.795246   40298 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dc:6a:a2", ip: ""} in network mk-pause-893760: {Iface:virbr5 ExpiryTime:2025-11-29 10:28:33 +0000 UTC Type:0 Mac:52:54:00:dc:6a:a2 Iaid: IPaddr:192.168.83.104 Prefix:24 Hostname:pause-893760 Clientid:01:52:54:00:dc:6a:a2}
	I1129 09:29:40.795276   40298 main.go:143] libmachine: domain pause-893760 has defined IP address 192.168.83.104 and MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:40.795464   40298 main.go:143] libmachine: Using SSH client type: native
	I1129 09:29:40.795727   40298 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.104 22 <nil> <nil>}
	I1129 09:29:40.795744   40298 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-893760 && echo "pause-893760" | sudo tee /etc/hostname
	I1129 09:29:40.946964   40298 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-893760
	
	I1129 09:29:40.950875   40298 main.go:143] libmachine: domain pause-893760 has defined MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:40.951450   40298 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dc:6a:a2", ip: ""} in network mk-pause-893760: {Iface:virbr5 ExpiryTime:2025-11-29 10:28:33 +0000 UTC Type:0 Mac:52:54:00:dc:6a:a2 Iaid: IPaddr:192.168.83.104 Prefix:24 Hostname:pause-893760 Clientid:01:52:54:00:dc:6a:a2}
	I1129 09:29:40.951480   40298 main.go:143] libmachine: domain pause-893760 has defined IP address 192.168.83.104 and MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:40.951701   40298 main.go:143] libmachine: Using SSH client type: native
	I1129 09:29:40.951960   40298 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.104 22 <nil> <nil>}
	I1129 09:29:40.951981   40298 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-893760' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-893760/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-893760' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:29:41.074956   40298 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:29:41.074995   40298 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5651/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5651/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5651/.minikube}
	I1129 09:29:41.075041   40298 buildroot.go:174] setting up certificates
	I1129 09:29:41.075054   40298 provision.go:84] configureAuth start
	I1129 09:29:41.078409   40298 main.go:143] libmachine: domain pause-893760 has defined MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:41.078925   40298 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dc:6a:a2", ip: ""} in network mk-pause-893760: {Iface:virbr5 ExpiryTime:2025-11-29 10:28:33 +0000 UTC Type:0 Mac:52:54:00:dc:6a:a2 Iaid: IPaddr:192.168.83.104 Prefix:24 Hostname:pause-893760 Clientid:01:52:54:00:dc:6a:a2}
	I1129 09:29:41.078959   40298 main.go:143] libmachine: domain pause-893760 has defined IP address 192.168.83.104 and MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:41.081614   40298 main.go:143] libmachine: domain pause-893760 has defined MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:41.082192   40298 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dc:6a:a2", ip: ""} in network mk-pause-893760: {Iface:virbr5 ExpiryTime:2025-11-29 10:28:33 +0000 UTC Type:0 Mac:52:54:00:dc:6a:a2 Iaid: IPaddr:192.168.83.104 Prefix:24 Hostname:pause-893760 Clientid:01:52:54:00:dc:6a:a2}
	I1129 09:29:41.082222   40298 main.go:143] libmachine: domain pause-893760 has defined IP address 192.168.83.104 and MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:41.082446   40298 provision.go:143] copyHostCerts
	I1129 09:29:41.082501   40298 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5651/.minikube/ca.pem, removing ...
	I1129 09:29:41.082512   40298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5651/.minikube/ca.pem
	I1129 09:29:41.082570   40298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5651/.minikube/ca.pem (1082 bytes)
	I1129 09:29:41.082677   40298 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5651/.minikube/cert.pem, removing ...
	I1129 09:29:41.082690   40298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5651/.minikube/cert.pem
	I1129 09:29:41.082713   40298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5651/.minikube/cert.pem (1123 bytes)
	I1129 09:29:41.082764   40298 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5651/.minikube/key.pem, removing ...
	I1129 09:29:41.082771   40298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5651/.minikube/key.pem
	I1129 09:29:41.082789   40298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5651/.minikube/key.pem (1679 bytes)
	I1129 09:29:41.082884   40298 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5651/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca-key.pem org=jenkins.pause-893760 san=[127.0.0.1 192.168.83.104 localhost minikube pause-893760]
	I1129 09:29:41.266431   40298 provision.go:177] copyRemoteCerts
	I1129 09:29:41.266498   40298 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:29:41.270344   40298 main.go:143] libmachine: domain pause-893760 has defined MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:41.270817   40298 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dc:6a:a2", ip: ""} in network mk-pause-893760: {Iface:virbr5 ExpiryTime:2025-11-29 10:28:33 +0000 UTC Type:0 Mac:52:54:00:dc:6a:a2 Iaid: IPaddr:192.168.83.104 Prefix:24 Hostname:pause-893760 Clientid:01:52:54:00:dc:6a:a2}
	I1129 09:29:41.270875   40298 main.go:143] libmachine: domain pause-893760 has defined IP address 192.168.83.104 and MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:41.271109   40298 sshutil.go:53] new ssh client: &{IP:192.168.83.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/pause-893760/id_rsa Username:docker}
	I1129 09:29:41.372810   40298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 09:29:41.414001   40298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1129 09:29:41.450509   40298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1129 09:29:41.490668   40298 provision.go:87] duration metric: took 415.599353ms to configureAuth
	I1129 09:29:41.490704   40298 buildroot.go:189] setting minikube options for container-runtime
	I1129 09:29:41.491018   40298 config.go:182] Loaded profile config "pause-893760": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:29:41.494717   40298 main.go:143] libmachine: domain pause-893760 has defined MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:41.495327   40298 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dc:6a:a2", ip: ""} in network mk-pause-893760: {Iface:virbr5 ExpiryTime:2025-11-29 10:28:33 +0000 UTC Type:0 Mac:52:54:00:dc:6a:a2 Iaid: IPaddr:192.168.83.104 Prefix:24 Hostname:pause-893760 Clientid:01:52:54:00:dc:6a:a2}
	I1129 09:29:41.495357   40298 main.go:143] libmachine: domain pause-893760 has defined IP address 192.168.83.104 and MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:41.495602   40298 main.go:143] libmachine: Using SSH client type: native
	I1129 09:29:41.495871   40298 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.104 22 <nil> <nil>}
	I1129 09:29:41.495897   40298 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:29:47.115744   40298 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:29:47.115771   40298 machine.go:97] duration metric: took 6.451857244s to provisionDockerMachine
	I1129 09:29:47.115820   40298 start.go:293] postStartSetup for "pause-893760" (driver="kvm2")
	I1129 09:29:47.115868   40298 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:29:47.115936   40298 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:29:47.119587   40298 main.go:143] libmachine: domain pause-893760 has defined MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:47.120131   40298 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dc:6a:a2", ip: ""} in network mk-pause-893760: {Iface:virbr5 ExpiryTime:2025-11-29 10:28:33 +0000 UTC Type:0 Mac:52:54:00:dc:6a:a2 Iaid: IPaddr:192.168.83.104 Prefix:24 Hostname:pause-893760 Clientid:01:52:54:00:dc:6a:a2}
	I1129 09:29:47.120174   40298 main.go:143] libmachine: domain pause-893760 has defined IP address 192.168.83.104 and MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:47.120358   40298 sshutil.go:53] new ssh client: &{IP:192.168.83.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/pause-893760/id_rsa Username:docker}
	I1129 09:29:47.212647   40298 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:29:47.217645   40298 info.go:137] Remote host: Buildroot 2025.02
	I1129 09:29:47.217673   40298 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5651/.minikube/addons for local assets ...
	I1129 09:29:47.217755   40298 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5651/.minikube/files for local assets ...
	I1129 09:29:47.217875   40298 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5651/.minikube/files/etc/ssl/certs/96132.pem -> 96132.pem in /etc/ssl/certs
	I1129 09:29:47.218022   40298 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:29:47.230220   40298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/files/etc/ssl/certs/96132.pem --> /etc/ssl/certs/96132.pem (1708 bytes)
	I1129 09:29:47.266909   40298 start.go:296] duration metric: took 151.023976ms for postStartSetup
	I1129 09:29:47.266956   40298 fix.go:56] duration metric: took 6.608059583s for fixHost
	I1129 09:29:47.269539   40298 main.go:143] libmachine: domain pause-893760 has defined MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:47.270081   40298 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dc:6a:a2", ip: ""} in network mk-pause-893760: {Iface:virbr5 ExpiryTime:2025-11-29 10:28:33 +0000 UTC Type:0 Mac:52:54:00:dc:6a:a2 Iaid: IPaddr:192.168.83.104 Prefix:24 Hostname:pause-893760 Clientid:01:52:54:00:dc:6a:a2}
	I1129 09:29:47.270121   40298 main.go:143] libmachine: domain pause-893760 has defined IP address 192.168.83.104 and MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:47.270342   40298 main.go:143] libmachine: Using SSH client type: native
	I1129 09:29:47.270602   40298 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.83.104 22 <nil> <nil>}
	I1129 09:29:47.270621   40298 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1129 09:29:47.391054   40298 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764408587.385370842
	
	I1129 09:29:47.391077   40298 fix.go:216] guest clock: 1764408587.385370842
	I1129 09:29:47.391085   40298 fix.go:229] Guest: 2025-11-29 09:29:47.385370842 +0000 UTC Remote: 2025-11-29 09:29:47.266960386 +0000 UTC m=+9.301302801 (delta=118.410456ms)
	I1129 09:29:47.391100   40298 fix.go:200] guest clock delta is within tolerance: 118.410456ms
	I1129 09:29:47.391105   40298 start.go:83] releasing machines lock for "pause-893760", held for 6.73223483s
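	Side note on the guest-clock check logged just above: fixHost reads the guest clock over SSH (date +%s.%N), compares it with the host clock, and accepts the drift if it is small. An illustrative recomputation of that delta in Go, using the two timestamps from the log; the 2s tolerance here is an assumption for the sketch, not minikube's configured bound:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(1764408587, 385370842)                       // guest: 1764408587.385370842
		host := time.Date(2025, 11, 29, 9, 29, 47, 266960386, time.UTC) // host: 09:29:47.266960386 UTC
		delta := guest.Sub(host)
		const tolerance = 2 * time.Second // assumed bound for illustration
		if delta > tolerance || delta < -tolerance {
			fmt.Printf("guest clock drift %v exceeds tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v within tolerance\n", delta) // ~118.410456ms
		}
	}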
	I1129 09:29:47.393869   40298 main.go:143] libmachine: domain pause-893760 has defined MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:47.394543   40298 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dc:6a:a2", ip: ""} in network mk-pause-893760: {Iface:virbr5 ExpiryTime:2025-11-29 10:28:33 +0000 UTC Type:0 Mac:52:54:00:dc:6a:a2 Iaid: IPaddr:192.168.83.104 Prefix:24 Hostname:pause-893760 Clientid:01:52:54:00:dc:6a:a2}
	I1129 09:29:47.394571   40298 main.go:143] libmachine: domain pause-893760 has defined IP address 192.168.83.104 and MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:47.395199   40298 ssh_runner.go:195] Run: cat /version.json
	I1129 09:29:47.395246   40298 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:29:47.398934   40298 main.go:143] libmachine: domain pause-893760 has defined MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:47.398977   40298 main.go:143] libmachine: domain pause-893760 has defined MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:47.399429   40298 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dc:6a:a2", ip: ""} in network mk-pause-893760: {Iface:virbr5 ExpiryTime:2025-11-29 10:28:33 +0000 UTC Type:0 Mac:52:54:00:dc:6a:a2 Iaid: IPaddr:192.168.83.104 Prefix:24 Hostname:pause-893760 Clientid:01:52:54:00:dc:6a:a2}
	I1129 09:29:47.399470   40298 main.go:143] libmachine: domain pause-893760 has defined IP address 192.168.83.104 and MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:47.399516   40298 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dc:6a:a2", ip: ""} in network mk-pause-893760: {Iface:virbr5 ExpiryTime:2025-11-29 10:28:33 +0000 UTC Type:0 Mac:52:54:00:dc:6a:a2 Iaid: IPaddr:192.168.83.104 Prefix:24 Hostname:pause-893760 Clientid:01:52:54:00:dc:6a:a2}
	I1129 09:29:47.399550   40298 main.go:143] libmachine: domain pause-893760 has defined IP address 192.168.83.104 and MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:47.399644   40298 sshutil.go:53] new ssh client: &{IP:192.168.83.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/pause-893760/id_rsa Username:docker}
	I1129 09:29:47.399887   40298 sshutil.go:53] new ssh client: &{IP:192.168.83.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/pause-893760/id_rsa Username:docker}
	I1129 09:29:47.516003   40298 ssh_runner.go:195] Run: systemctl --version
	I1129 09:29:47.522801   40298 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:29:47.679854   40298 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:29:47.689643   40298 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:29:47.689752   40298 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:29:47.705320   40298 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1129 09:29:47.705352   40298 start.go:496] detecting cgroup driver to use...
	I1129 09:29:47.705446   40298 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:29:47.734334   40298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:29:47.754482   40298 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:29:47.754541   40298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:29:47.776912   40298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:29:47.799020   40298 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:29:48.002872   40298 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:29:48.183157   40298 docker.go:234] disabling docker service ...
	I1129 09:29:48.183255   40298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:29:48.215891   40298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:29:48.239892   40298 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:29:48.456590   40298 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:29:48.651258   40298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:29:48.675913   40298 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:29:48.706263   40298 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:29:48.706357   40298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:29:48.720317   40298 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 09:29:48.720395   40298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:29:48.737594   40298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:29:48.753980   40298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:29:48.767760   40298 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:29:48.783054   40298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:29:48.797890   40298 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:29:48.813957   40298 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:29:48.828759   40298 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:29:48.841153   40298 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:29:48.854702   40298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:29:49.066053   40298 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 09:29:49.353036   40298 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:29:49.353116   40298 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:29:49.359074   40298 start.go:564] Will wait 60s for crictl version
	I1129 09:29:49.359144   40298 ssh_runner.go:195] Run: which crictl
	I1129 09:29:49.363725   40298 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1129 09:29:49.401535   40298 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1129 09:29:49.401654   40298 ssh_runner.go:195] Run: crio --version
	I1129 09:29:49.440717   40298 ssh_runner.go:195] Run: crio --version
	I1129 09:29:49.479948   40298 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1129 09:29:49.484428   40298 main.go:143] libmachine: domain pause-893760 has defined MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:49.484929   40298 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:dc:6a:a2", ip: ""} in network mk-pause-893760: {Iface:virbr5 ExpiryTime:2025-11-29 10:28:33 +0000 UTC Type:0 Mac:52:54:00:dc:6a:a2 Iaid: IPaddr:192.168.83.104 Prefix:24 Hostname:pause-893760 Clientid:01:52:54:00:dc:6a:a2}
	I1129 09:29:49.484961   40298 main.go:143] libmachine: domain pause-893760 has defined IP address 192.168.83.104 and MAC address 52:54:00:dc:6a:a2 in network mk-pause-893760
	I1129 09:29:49.485202   40298 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1129 09:29:49.491589   40298 kubeadm.go:884] updating cluster {Name:pause-893760 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-893760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.104 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:29:49.491798   40298 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:29:49.491919   40298 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:29:49.536956   40298 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:29:49.536976   40298 crio.go:433] Images already preloaded, skipping extraction
	I1129 09:29:49.537016   40298 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:29:49.576402   40298 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:29:49.576432   40298 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:29:49.576442   40298 kubeadm.go:935] updating node { 192.168.83.104 8443 v1.34.1 crio true true} ...
	I1129 09:29:49.576553   40298 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-893760 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-893760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
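
The unit drop-in rendered above is what gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A quick way to confirm what the node actually ended up running (a sketch, assuming the pause-893760 profile is still up) is to dump the merged unit over SSH:

  minikube ssh -p pause-893760 -- sudo systemctl cat kubelet
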
	I1129 09:29:49.576645   40298 ssh_runner.go:195] Run: crio config
	I1129 09:29:49.635676   40298 cni.go:84] Creating CNI manager for ""
	I1129 09:29:49.635711   40298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1129 09:29:49.635731   40298 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:29:49.635758   40298 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.104 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-893760 NodeName:pause-893760 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.104"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.104 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:29:49.635944   40298 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.104
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-893760"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.104"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.104"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
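
The three-document kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp below). If it ever needs a manual sanity check, a sketch, assuming the file is still on the node and that this kubeadm build ships the validate subcommand:

  minikube ssh -p pause-893760 -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
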
	
	I1129 09:29:49.636032   40298 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:29:49.650987   40298 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:29:49.651065   40298 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:29:49.666010   40298 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1129 09:29:49.693154   40298 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:29:49.725089   40298 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1129 09:29:49.749809   40298 ssh_runner.go:195] Run: grep 192.168.83.104	control-plane.minikube.internal$ /etc/hosts
	I1129 09:29:49.754795   40298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:29:49.934190   40298 ssh_runner.go:195] Run: sudo systemctl start kubelet
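
The daemon-reload picks up the rewritten unit and drop-in before kubelet is restarted. Whether the service actually came up can be checked the same way the audit table earlier in this log does for other profiles (assuming the service name kubelet, as the unit file suggests):

  minikube ssh -p pause-893760 -- sudo systemctl is-active kubelet
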
	I1129 09:29:49.955757   40298 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/pause-893760 for IP: 192.168.83.104
	I1129 09:29:49.955782   40298 certs.go:195] generating shared ca certs ...
	I1129 09:29:49.955800   40298 certs.go:227] acquiring lock for ca certs: {Name:mk263acc791d5a2c77504c81548ce554781ff9eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:29:49.956007   40298 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5651/.minikube/ca.key
	I1129 09:29:49.956077   40298 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5651/.minikube/proxy-client-ca.key
	I1129 09:29:49.956093   40298 certs.go:257] generating profile certs ...
	I1129 09:29:49.956204   40298 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/pause-893760/client.key
	I1129 09:29:49.956277   40298 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/pause-893760/apiserver.key.c425fe20
	I1129 09:29:49.956336   40298 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/pause-893760/proxy-client.key
	I1129 09:29:49.956502   40298 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/9613.pem (1338 bytes)
	W1129 09:29:49.956600   40298 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5651/.minikube/certs/9613_empty.pem, impossibly tiny 0 bytes
	I1129 09:29:49.956617   40298 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:29:49.956652   40298 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem (1082 bytes)
	I1129 09:29:49.956694   40298 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:29:49.956728   40298 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/key.pem (1679 bytes)
	I1129 09:29:49.956793   40298 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/files/etc/ssl/certs/96132.pem (1708 bytes)
	I1129 09:29:49.957418   40298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:29:49.989734   40298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:29:50.022803   40298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:29:50.056913   40298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 09:29:50.092000   40298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/pause-893760/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1129 09:29:50.127991   40298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/pause-893760/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:29:50.160235   40298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/pause-893760/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:29:50.192137   40298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/pause-893760/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:29:50.233615   40298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/certs/9613.pem --> /usr/share/ca-certificates/9613.pem (1338 bytes)
	I1129 09:29:50.270443   40298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/files/etc/ssl/certs/96132.pem --> /usr/share/ca-certificates/96132.pem (1708 bytes)
	I1129 09:29:50.307192   40298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:29:50.343892   40298 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:29:50.374725   40298 ssh_runner.go:195] Run: openssl version
	I1129 09:29:50.383192   40298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96132.pem && ln -fs /usr/share/ca-certificates/96132.pem /etc/ssl/certs/96132.pem"
	I1129 09:29:50.400778   40298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96132.pem
	I1129 09:29:50.406841   40298 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:36 /usr/share/ca-certificates/96132.pem
	I1129 09:29:50.406915   40298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96132.pem
	I1129 09:29:50.416099   40298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96132.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:29:50.430992   40298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:29:50.448477   40298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:29:50.454938   40298 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:29:50.455030   40298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:29:50.463760   40298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1129 09:29:50.477331   40298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9613.pem && ln -fs /usr/share/ca-certificates/9613.pem /etc/ssl/certs/9613.pem"
	I1129 09:29:50.497289   40298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9613.pem
	I1129 09:29:50.503667   40298 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:36 /usr/share/ca-certificates/9613.pem
	I1129 09:29:50.503747   40298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9613.pem
	I1129 09:29:50.514004   40298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9613.pem /etc/ssl/certs/51391683.0"
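
The test -L / ln -fs pattern above implements OpenSSL's hashed-directory convention: x509 -hash prints the subject-name hash, and a symlink named <hash>.0 under /etc/ssl/certs lets OpenSSL locate the CA by hash at verification time. The b5213941 value for minikubeCA.pem is reproducible by hand:

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  # prints b5213941, matching the /etc/ssl/certs/b5213941.0 link created above
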
	I1129 09:29:50.580651   40298 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:29:50.604331   40298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1129 09:29:50.620214   40298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1129 09:29:50.636045   40298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1129 09:29:50.651551   40298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1129 09:29:50.666066   40298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1129 09:29:50.679273   40298 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
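
Each -checkend 86400 probe exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now; that is how minikube decides whether any control-plane cert needs regeneration before reuse. The same check works on any of the files, for example:

  sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 && echo "valid for at least 24h"
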
	I1129 09:29:50.694337   40298 kubeadm.go:401] StartCluster: {Name:pause-893760 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-893760 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.104 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:29:50.694502   40298 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:29:50.694623   40298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:29:50.845423   40298 cri.go:89] found id: "c6ef4430e1d2bb8f6efd3aaff4706e7b741d6c4ede2877fa5847dff6b81a716e"
	I1129 09:29:50.845450   40298 cri.go:89] found id: "0f3d4dff0b125e274f62cb724c266302348a66b8de1a6a252436e2ed3ed40d14"
	I1129 09:29:50.845457   40298 cri.go:89] found id: "76309cd327d8550266d604a0e241c85c25fd78d3d5d17921de3f8cf4f0627e34"
	I1129 09:29:50.845462   40298 cri.go:89] found id: "178b3ab1cb251b3a9f7c21cd176343ca8ae0a3af11799761ee56e2de3cedd41b"
	I1129 09:29:50.845467   40298 cri.go:89] found id: "05767c9e32bf840a41366cf3cb2e44487bf62d2faa02a457e94274459c9b490a"
	I1129 09:29:50.845474   40298 cri.go:89] found id: "7555626b5cb53f89c622444b7a65f0d4e5204daa98e629811921ef3bd8259c26"
	I1129 09:29:50.845479   40298 cri.go:89] found id: ""
	I1129 09:29:50.845531   40298 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
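
The crictl call at the end of that trace is how minikube enumerates control-plane containers before pausing: --label filters on the io.kubernetes.pod.namespace label that CRI-O stamps on every container, and --quiet restricts output to bare IDs. Run against the node it looks like this (the IDs will differ per run):

  minikube ssh -p pause-893760 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
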
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-893760 -n pause-893760
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-893760 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-893760 logs -n 25: (1.377634948s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p guest-872325 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                 │ guest-872325              │ jenkins │ v1.37.0 │ 29 Nov 25 09:26 UTC │ 29 Nov 25 09:26 UTC │
	│ ssh     │ -p NoKubernetes-371904 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-371904       │ jenkins │ v1.37.0 │ 29 Nov 25 09:26 UTC │                     │
	│ stop    │ -p NoKubernetes-371904                                                                                                                                                                                                  │ NoKubernetes-371904       │ jenkins │ v1.37.0 │ 29 Nov 25 09:26 UTC │ 29 Nov 25 09:26 UTC │
	│ start   │ -p NoKubernetes-371904 --driver=kvm2  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-371904       │ jenkins │ v1.37.0 │ 29 Nov 25 09:26 UTC │ 29 Nov 25 09:26 UTC │
	│ delete  │ -p kubernetes-upgrade-553896                                                                                                                                                                                            │ kubernetes-upgrade-553896 │ jenkins │ v1.37.0 │ 29 Nov 25 09:26 UTC │ 29 Nov 25 09:26 UTC │
	│ start   │ -p force-systemd-env-743631 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                                │ force-systemd-env-743631  │ jenkins │ v1.37.0 │ 29 Nov 25 09:26 UTC │ 29 Nov 25 09:27 UTC │
	│ start   │ -p force-systemd-flag-325714 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                               │ force-systemd-flag-325714 │ jenkins │ v1.37.0 │ 29 Nov 25 09:26 UTC │ 29 Nov 25 09:27 UTC │
	│ ssh     │ -p NoKubernetes-371904 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-371904       │ jenkins │ v1.37.0 │ 29 Nov 25 09:26 UTC │                     │
	│ delete  │ -p NoKubernetes-371904                                                                                                                                                                                                  │ NoKubernetes-371904       │ jenkins │ v1.37.0 │ 29 Nov 25 09:26 UTC │ 29 Nov 25 09:26 UTC │
	│ start   │ -p cert-expiration-369885 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                    │ cert-expiration-369885    │ jenkins │ v1.37.0 │ 29 Nov 25 09:26 UTC │ 29 Nov 25 09:28 UTC │
	│ delete  │ -p force-systemd-env-743631                                                                                                                                                                                             │ force-systemd-env-743631  │ jenkins │ v1.37.0 │ 29 Nov 25 09:27 UTC │ 29 Nov 25 09:27 UTC │
	│ start   │ -p cert-options-648964 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-648964       │ jenkins │ v1.37.0 │ 29 Nov 25 09:27 UTC │ 29 Nov 25 09:28 UTC │
	│ ssh     │ force-systemd-flag-325714 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                    │ force-systemd-flag-325714 │ jenkins │ v1.37.0 │ 29 Nov 25 09:27 UTC │ 29 Nov 25 09:27 UTC │
	│ delete  │ -p force-systemd-flag-325714                                                                                                                                                                                            │ force-systemd-flag-325714 │ jenkins │ v1.37.0 │ 29 Nov 25 09:27 UTC │ 29 Nov 25 09:27 UTC │
	│ start   │ -p pause-893760 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                 │ pause-893760              │ jenkins │ v1.37.0 │ 29 Nov 25 09:27 UTC │ 29 Nov 25 09:29 UTC │
	│ ssh     │ cert-options-648964 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                             │ cert-options-648964       │ jenkins │ v1.37.0 │ 29 Nov 25 09:28 UTC │ 29 Nov 25 09:28 UTC │
	│ ssh     │ -p cert-options-648964 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                           │ cert-options-648964       │ jenkins │ v1.37.0 │ 29 Nov 25 09:28 UTC │ 29 Nov 25 09:28 UTC │
	│ delete  │ -p cert-options-648964                                                                                                                                                                                                  │ cert-options-648964       │ jenkins │ v1.37.0 │ 29 Nov 25 09:28 UTC │ 29 Nov 25 09:28 UTC │
	│ start   │ -p stopped-upgrade-044628 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-044628    │ jenkins │ v1.35.0 │ 29 Nov 25 09:28 UTC │ 29 Nov 25 09:29 UTC │
	│ stop    │ stopped-upgrade-044628 stop                                                                                                                                                                                             │ stopped-upgrade-044628    │ jenkins │ v1.35.0 │ 29 Nov 25 09:29 UTC │ 29 Nov 25 09:29 UTC │
	│ start   │ -p stopped-upgrade-044628 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                  │ stopped-upgrade-044628    │ jenkins │ v1.37.0 │ 29 Nov 25 09:29 UTC │ 29 Nov 25 09:29 UTC │
	│ start   │ -p pause-893760 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-893760              │ jenkins │ v1.37.0 │ 29 Nov 25 09:29 UTC │ 29 Nov 25 09:30 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-044628 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ stopped-upgrade-044628    │ jenkins │ v1.37.0 │ 29 Nov 25 09:29 UTC │                     │
	│ delete  │ -p stopped-upgrade-044628                                                                                                                                                                                               │ stopped-upgrade-044628    │ jenkins │ v1.37.0 │ 29 Nov 25 09:29 UTC │ 29 Nov 25 09:29 UTC │
	│ start   │ -p auto-473168 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                                                                                   │ auto-473168               │ jenkins │ v1.37.0 │ 29 Nov 25 09:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:29:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
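
Decoded against that format string, the first entry below reads: severity I (info), date 11/29, time 09:29:59.022401, thread id 40531, emitted from out.go line 360. The thread id is constant per process, which is how the interleaved 35232, 40298 and 40531 streams further down can be told apart.
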
	I1129 09:29:59.022401   40531 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:29:59.022676   40531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:29:59.022685   40531 out.go:374] Setting ErrFile to fd 2...
	I1129 09:29:59.022689   40531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:29:59.022912   40531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
	I1129 09:29:59.023389   40531 out.go:368] Setting JSON to false
	I1129 09:29:59.024282   40531 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4343,"bootTime":1764404256,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:29:59.024341   40531 start.go:143] virtualization: kvm guest
	I1129 09:29:59.026786   40531 out.go:179] * [auto-473168] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:29:59.028641   40531 notify.go:221] Checking for updates...
	I1129 09:29:59.028680   40531 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:29:59.030442   40531 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:29:59.031919   40531 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5651/kubeconfig
	I1129 09:29:59.033301   40531 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5651/.minikube
	I1129 09:29:59.034543   40531 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:29:59.035951   40531 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:29:59.037662   40531 config.go:182] Loaded profile config "cert-expiration-369885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:29:59.037738   40531 config.go:182] Loaded profile config "guest-872325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1129 09:29:59.037863   40531 config.go:182] Loaded profile config "pause-893760": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:29:59.037948   40531 config.go:182] Loaded profile config "running-upgrade-501515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1129 09:29:59.038039   40531 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:29:59.077526   40531 out.go:179] * Using the kvm2 driver based on user configuration
	I1129 09:29:59.078926   40531 start.go:309] selected driver: kvm2
	I1129 09:29:59.078940   40531 start.go:927] validating driver "kvm2" against <nil>
	I1129 09:29:59.078950   40531 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:29:59.079625   40531 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 09:29:59.079869   40531 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:29:59.079897   40531 cni.go:84] Creating CNI manager for ""
	I1129 09:29:59.079937   40531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1129 09:29:59.079945   40531 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1129 09:29:59.079982   40531 start.go:353] cluster config:
	{Name:auto-473168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-473168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:29:59.080074   40531 iso.go:125] acquiring lock: {Name:mk0184b92a126aea44cd27d4836c247b817b0333 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:29:59.081496   40531 out.go:179] * Starting "auto-473168" primary control-plane node in "auto-473168" cluster
	I1129 09:29:59.082592   40531 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:29:59.082619   40531 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5651/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 09:29:59.082625   40531 cache.go:65] Caching tarball of preloaded images
	I1129 09:29:59.082708   40531 preload.go:238] Found /home/jenkins/minikube-integration/22000-5651/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 09:29:59.082719   40531 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:29:59.082812   40531 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/config.json ...
	I1129 09:29:59.082851   40531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/config.json: {Name:mkb2f106e8d4acad317b06f5df886bb1f9b2bb67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:29:59.082979   40531 start.go:360] acquireMachinesLock for auto-473168: {Name:mke0bd376b87e419ebada00803bbcbb9230316d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1129 09:29:59.083010   40531 start.go:364] duration metric: took 18.699µs to acquireMachinesLock for "auto-473168"
	I1129 09:29:59.083032   40531 start.go:93] Provisioning new machine with config: &{Name:auto-473168 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-473168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:29:59.083099   40531 start.go:125] createHost starting for "" (driver="kvm2")
	I1129 09:29:57.082123   35232 logs.go:123] Gathering logs for etcd [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5] ...
	I1129 09:29:57.082160   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:29:57.125947   35232 logs.go:123] Gathering logs for kube-proxy [3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df] ...
	I1129 09:29:57.125984   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:29:57.168068   35232 logs.go:123] Gathering logs for container status ...
	I1129 09:29:57.168101   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:29:57.222774   35232 logs.go:123] Gathering logs for kubelet ...
	I1129 09:29:57.222871   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:29:57.331532   35232 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:29:57.331580   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:29:57.403700   35232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
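
The refused connection to localhost:8443 means the apiserver of the cluster that process 35232 is polling (a v1.32.0 profile, per the kubectl path above) is not listening yet, so describe nodes cannot work; the loop below falls back to pulling each component's container logs with crictl. Whether the apiserver container itself exists can be checked the same way the harness does shortly after:

  sudo crictl ps -a --quiet --name=kube-apiserver
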
	I1129 09:29:57.403728   35232 logs.go:123] Gathering logs for coredns [c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a] ...
	I1129 09:29:57.403748   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:29:57.453928   35232 logs.go:123] Gathering logs for kube-proxy [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9] ...
	I1129 09:29:57.453967   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:29:57.525162   35232 logs.go:123] Gathering logs for kube-scheduler [a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126] ...
	I1129 09:29:57.525204   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:29:57.568113   35232 logs.go:123] Gathering logs for storage-provisioner [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666] ...
	I1129 09:29:57.568149   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:29:57.611335   35232 logs.go:123] Gathering logs for kube-controller-manager [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6] ...
	I1129 09:29:57.611369   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:29:57.656563   35232 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:29:57.656594   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:30:00.550760   35232 api_server.go:253] Checking apiserver healthz at https://192.168.72.99:8443/healthz ...
	I1129 09:30:00.551607   35232 api_server.go:269] stopped: https://192.168.72.99:8443/healthz: Get "https://192.168.72.99:8443/healthz": dial tcp 192.168.72.99:8443: connect: connection refused
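
connect: connection refused is a TCP-level failure: nothing is bound on 8443 at all yet, as opposed to a slow or hung apiserver timing out. The same probe can be issued by hand; -k is needed because the apiserver presents a cluster-local certificate:

  curl -k https://192.168.72.99:8443/healthz
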
	I1129 09:30:00.551722   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:30:00.551805   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:30:00.601757   35232 cri.go:89] found id: "d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d"
	I1129 09:30:00.601784   35232 cri.go:89] found id: ""
	I1129 09:30:00.601796   35232 logs.go:282] 1 containers: [d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d]
	I1129 09:30:00.601883   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:00.606491   35232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:30:00.606590   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:30:00.645604   35232 cri.go:89] found id: "2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:30:00.645633   35232 cri.go:89] found id: ""
	I1129 09:30:00.645644   35232 logs.go:282] 1 containers: [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5]
	I1129 09:30:00.645695   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:00.650938   35232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:30:00.651040   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:30:00.698953   35232 cri.go:89] found id: "5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b"
	I1129 09:30:00.698975   35232 cri.go:89] found id: "c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:30:00.698979   35232 cri.go:89] found id: ""
	I1129 09:30:00.698989   35232 logs.go:282] 2 containers: [5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a]
	I1129 09:30:00.699058   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:00.704079   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:00.709207   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:30:00.709312   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:30:00.747252   35232 cri.go:89] found id: "904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b"
	I1129 09:30:00.747280   35232 cri.go:89] found id: "a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:30:00.747286   35232 cri.go:89] found id: ""
	I1129 09:30:00.747296   35232 logs.go:282] 2 containers: [904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126]
	I1129 09:30:00.747361   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:00.752150   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:00.756718   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:30:00.756793   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:30:00.799717   35232 cri.go:89] found id: "3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:30:00.799747   35232 cri.go:89] found id: "3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:30:00.799756   35232 cri.go:89] found id: ""
	I1129 09:30:00.799766   35232 logs.go:282] 2 containers: [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df]
	I1129 09:30:00.799867   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:00.804621   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:00.808682   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:30:00.808764   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:30:00.859492   35232 cri.go:89] found id: "b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:30:00.859529   35232 cri.go:89] found id: ""
	I1129 09:30:00.859539   35232 logs.go:282] 1 containers: [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6]
	I1129 09:30:00.859598   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:00.865176   35232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:30:00.865254   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:30:00.904032   35232 cri.go:89] found id: ""
	I1129 09:30:00.904064   35232 logs.go:282] 0 containers: []
	W1129 09:30:00.904071   35232 logs.go:284] No container was found matching "kindnet"
	I1129 09:30:00.904077   35232 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:30:00.904130   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:30:00.941697   35232 cri.go:89] found id: "60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:30:00.941724   35232 cri.go:89] found id: ""
	I1129 09:30:00.941736   35232 logs.go:282] 1 containers: [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666]
	I1129 09:30:00.941796   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:00.947067   35232 logs.go:123] Gathering logs for dmesg ...
	I1129 09:30:00.947103   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:30:00.961976   35232 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:30:00.962007   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:30:01.037057   35232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:30:01.037102   35232 logs.go:123] Gathering logs for coredns [c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a] ...
	I1129 09:30:01.037120   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:30:01.079388   35232 logs.go:123] Gathering logs for kube-scheduler [904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b] ...
	I1129 09:30:01.079417   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b"
	I1129 09:30:01.174863   35232 logs.go:123] Gathering logs for kube-scheduler [a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126] ...
	I1129 09:30:01.174897   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:30:01.218909   35232 logs.go:123] Gathering logs for kube-proxy [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9] ...
	I1129 09:30:01.218941   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:30:01.270420   35232 logs.go:123] Gathering logs for kube-proxy [3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df] ...
	I1129 09:30:01.270467   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:30:01.310765   35232 logs.go:123] Gathering logs for kube-controller-manager [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6] ...
	I1129 09:30:01.310814   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:30:01.353703   35232 logs.go:123] Gathering logs for kubelet ...
	I1129 09:30:01.353734   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:30:01.456399   35232 logs.go:123] Gathering logs for kube-apiserver [d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d] ...
	I1129 09:30:01.456438   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d"
	I1129 09:30:01.500516   35232 logs.go:123] Gathering logs for etcd [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5] ...
	I1129 09:30:01.500554   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:30:01.547853   35232 logs.go:123] Gathering logs for coredns [5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b] ...
	I1129 09:30:01.547896   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b"
	I1129 09:30:01.601869   35232 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:30:01.601906   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:30:01.957212   35232 logs.go:123] Gathering logs for storage-provisioner [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666] ...
	I1129 09:30:01.957254   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:30:02.001158   35232 logs.go:123] Gathering logs for container status ...
	I1129 09:30:02.001187   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:29:58.347303   40298 api_server.go:269] stopped: https://192.168.83.104:8443/healthz: Get "https://192.168.83.104:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1129 09:29:58.347371   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:29:59.084611   40531 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1129 09:29:59.084764   40531 start.go:159] libmachine.API.Create for "auto-473168" (driver="kvm2")
	I1129 09:29:59.084801   40531 client.go:173] LocalClient.Create starting
	I1129 09:29:59.084898   40531 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem
	I1129 09:29:59.084939   40531 main.go:143] libmachine: Decoding PEM data...
	I1129 09:29:59.084962   40531 main.go:143] libmachine: Parsing certificate...
	I1129 09:29:59.085040   40531 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-5651/.minikube/certs/cert.pem
	I1129 09:29:59.085071   40531 main.go:143] libmachine: Decoding PEM data...
	I1129 09:29:59.085089   40531 main.go:143] libmachine: Parsing certificate...
	I1129 09:29:59.085426   40531 main.go:143] libmachine: creating domain...
	I1129 09:29:59.085444   40531 main.go:143] libmachine: creating network...
	I1129 09:29:59.086914   40531 main.go:143] libmachine: found existing default network
	I1129 09:29:59.087164   40531 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
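
The <forward mode='nat'> network above is libvirt's stock default bridge (virbr0); minikube reuses it only for the VM's secondary NIC and creates a dedicated network for the primary one below. Both can be inspected with virsh (a sketch, assuming the qemu:///system connection shown in the cluster config):

  virsh --connect qemu:///system net-list --all
  virsh --connect qemu:///system net-dumpxml default
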
	
	I1129 09:29:59.088154   40531 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:6c:28:57} reservation:<nil>}
	I1129 09:29:59.089063   40531 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bfc930}
	I1129 09:29:59.089152   40531 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-auto-473168</name>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1129 09:29:59.095810   40531 main.go:143] libmachine: creating private network mk-auto-473168 192.168.50.0/24...
	I1129 09:29:59.178459   40531 main.go:143] libmachine: private network mk-auto-473168 192.168.50.0/24 created
	I1129 09:29:59.178793   40531 main.go:143] libmachine: <network>
	  <name>mk-auto-473168</name>
	  <uuid>cebc8e5d-2842-4160-862e-c2a1a73ad036</uuid>
	  <bridge name='virbr2' stp='on' delay='0'/>
	  <mac address='52:54:00:5c:7e:b3'/>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
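
Once the VM boots, the address handed out from the DHCP range above can be read back from libvirt instead of from inside the guest:

  virsh --connect qemu:///system net-dhcp-leases mk-auto-473168
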
	
	I1129 09:29:59.178846   40531 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168 ...
	I1129 09:29:59.178873   40531 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22000-5651/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1129 09:29:59.178906   40531 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22000-5651/.minikube
	I1129 09:29:59.178995   40531 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22000-5651/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22000-5651/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1129 09:29:59.427038   40531 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168/id_rsa...
	I1129 09:29:59.489929   40531 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168/auto-473168.rawdisk...
	I1129 09:29:59.489973   40531 main.go:143] libmachine: Writing magic tar header
	I1129 09:29:59.490018   40531 main.go:143] libmachine: Writing SSH key tar header
	I1129 09:29:59.490096   40531 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168 ...
	I1129 09:29:59.490153   40531 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168
	I1129 09:29:59.490189   40531 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168 (perms=drwx------)
	I1129 09:29:59.490205   40531 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22000-5651/.minikube/machines
	I1129 09:29:59.490220   40531 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22000-5651/.minikube/machines (perms=drwxr-xr-x)
	I1129 09:29:59.490232   40531 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22000-5651/.minikube
	I1129 09:29:59.490240   40531 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22000-5651/.minikube (perms=drwxr-xr-x)
	I1129 09:29:59.490249   40531 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22000-5651
	I1129 09:29:59.490257   40531 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22000-5651 (perms=drwxrwxr-x)
	I1129 09:29:59.490266   40531 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1129 09:29:59.490274   40531 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1129 09:29:59.490285   40531 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1129 09:29:59.490292   40531 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1129 09:29:59.490300   40531 main.go:143] libmachine: checking permissions on dir: /home
	I1129 09:29:59.490306   40531 main.go:143] libmachine: skipping /home - not owner
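The permission walk above runs bottom-up from the machine directory toward /home, adding the owner-executable bit to every directory the Jenkins user owns and skipping the ones it doesn't. A standard-library sketch of the same walk (assuming Linux, since syscall.Stat_t is used to read the owner):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"syscall"
)

// ensureExecutable walks from dir upward, adding the owner-executable bit
// to each directory the current user owns and skipping the rest, like the
// "Fixing permissions" sequence in the log.
func ensureExecutable(dir string, uid int) error {
	for {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		st, ok := info.Sys().(*syscall.Stat_t) // Linux-specific owner lookup
		if !ok || int(st.Uid) != uid {
			fmt.Printf("skipping %s - not owner\n", dir)
		} else {
			if err := os.Chmod(dir, info.Mode().Perm()|0o100); err != nil {
				return err
			}
			fmt.Printf("setting executable bit on %s\n", dir)
		}
		parent := filepath.Dir(dir)
		if parent == dir { // reached the filesystem root
			return nil
		}
		dir = parent
	}
}

func main() {
	machineDir := "/home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168"
	if err := ensureExecutable(machineDir, os.Getuid()); err != nil {
		log.Fatal(err)
	}
}
```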
	I1129 09:29:59.490313   40531 main.go:143] libmachine: defining domain...
	I1129 09:29:59.491769   40531 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>auto-473168</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168/auto-473168.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-auto-473168'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
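
The domain comes up in two steps: define, then start. A minimal sketch of the define step through the Go bindings follows (illustrative; the file name is hypothetical). The memory conversion visible in the "starting domain XML" dump below is libvirt's normalization: the submitted 3072 MiB becomes 3145728 KiB (3072 * 1024), and the uuid, controllers and PCI addresses are filled in the same pass.

```go
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// auto-473168.xml holds the <domain> document shown above.
	xml, err := os.ReadFile("auto-473168.xml")
	if err != nil {
		log.Fatal(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent domain. libvirt normalizes the definition,
	// which is why the XML dumped back later is richer than the XML
	// submitted here.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
}
```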
	
	I1129 09:29:59.497398   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a5:a4:9a in network default
	I1129 09:29:59.497973   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:29:59.497993   40531 main.go:143] libmachine: starting domain...
	I1129 09:29:59.497997   40531 main.go:143] libmachine: ensuring networks are active...
	I1129 09:29:59.498853   40531 main.go:143] libmachine: Ensuring network default is active
	I1129 09:29:59.499243   40531 main.go:143] libmachine: Ensuring network mk-auto-473168 is active
	I1129 09:29:59.499908   40531 main.go:143] libmachine: getting domain XML...
	I1129 09:29:59.500931   40531 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>auto-473168</name>
	  <uuid>12b9578d-d9c7-4043-80ea-3410fd280c4f</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168/auto-473168.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a2:da:2e'/>
	      <source network='mk-auto-473168'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:a5:a4:9a'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
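
Before the boot, the driver makes sure both attached networks are active, matching the "Ensuring network ... is active" lines above, and only then starts the domain. A sketch of that sequence with the Go bindings (an illustrative mapping of the logged steps to API calls, not the driver's code):

```go
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

// ensureActive starts a defined-but-inactive network, mirroring the
// "Ensuring network ... is active" lines above.
func ensureActive(conn *libvirt.Connect, name string) error {
	nw, err := conn.LookupNetworkByName(name)
	if err != nil {
		return err
	}
	defer nw.Free()
	active, err := nw.IsActive()
	if err != nil {
		return err
	}
	if !active {
		return nw.Create()
	}
	return nil
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Both attached networks must be up before the VM can boot.
	for _, name := range []string{"default", "mk-auto-473168"} {
		if err := ensureActive(conn, name); err != nil {
			log.Fatal(err)
		}
	}

	dom, err := conn.LookupDomainByName("auto-473168")
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil { // boots the defined VM
		log.Fatal(err)
	}
}
```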
	
	I1129 09:30:00.866038   40531 main.go:143] libmachine: waiting for domain to start...
	I1129 09:30:00.867616   40531 main.go:143] libmachine: domain is now running
	I1129 09:30:00.867640   40531 main.go:143] libmachine: waiting for IP...
	I1129 09:30:00.868710   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:00.869444   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:00.869459   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:00.869855   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:00.869895   40531 retry.go:31] will retry after 258.067011ms: waiting for domain to come up
	I1129 09:30:01.129631   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:01.130538   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:01.130560   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:01.130984   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:01.131025   40531 retry.go:31] will retry after 285.642559ms: waiting for domain to come up
	I1129 09:30:01.418751   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:01.419384   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:01.419397   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:01.419811   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:01.419870   40531 retry.go:31] will retry after 482.162859ms: waiting for domain to come up
	I1129 09:30:01.903262   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:01.904058   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:01.904072   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:01.904437   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:01.904476   40531 retry.go:31] will retry after 590.074753ms: waiting for domain to come up
	I1129 09:30:02.496529   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:02.497291   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:02.497316   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:02.497695   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:02.497738   40531 retry.go:31] will retry after 498.758845ms: waiting for domain to come up
	I1129 09:30:02.998688   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:02.999492   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:02.999522   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:02.999906   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:02.999940   40531 retry.go:31] will retry after 892.428522ms: waiting for domain to come up
	I1129 09:30:03.894011   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:03.894618   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:03.894635   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:03.895032   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:03.895073   40531 retry.go:31] will retry after 1.071001925s: waiting for domain to come up
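The retry.go lines above poll for the guest's address, first from the DHCP lease table, then via ARP, sleeping a jittered and roughly growing delay between attempts (258ms, 285ms, 482ms, 590ms, ...). A self-contained sketch of that loop; lookupIP is a hypothetical stand-in for the lease-then-ARP lookup:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the driver's lease-then-ARP lookup; it fails
// until the guest's DHCP lease shows up.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet for " + mac)
}

func main() {
	mac := "52:54:00:a2:da:2e"
	backoff := 250 * time.Millisecond
	for attempt := 1; attempt <= 15; attempt++ {
		if ip, err := lookupIP(mac); err == nil {
			fmt.Println("domain has IP", ip)
			return
		}
		// Jittered, slowly growing delays, similar in shape to the
		// sequence in the log above.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2
	}
	fmt.Println("gave up waiting for IP")
}
```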
	I1129 09:30:04.553102   35232 api_server.go:253] Checking apiserver healthz at https://192.168.72.99:8443/healthz ...
	I1129 09:30:04.553699   35232 api_server.go:269] stopped: https://192.168.72.99:8443/healthz: Get "https://192.168.72.99:8443/healthz": dial tcp 192.168.72.99:8443: connect: connection refused
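From here, lines from a second process (pid 35232) are interleaved; it is polling another cluster's apiserver at 192.168.72.99:8443 and getting connection refused. The healthz probe itself is just an HTTPS GET with a short timeout; a sketch of an equivalent check (illustrative; certificate verification is skipped because the probe runs outside the cluster's trust store):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is not in this host's trust store.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.72.99:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // e.g. "connect: connection refused"
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
}
```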
	I1129 09:30:04.553757   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:30:04.553857   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:30:04.593327   35232 cri.go:89] found id: "d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d"
	I1129 09:30:04.593353   35232 cri.go:89] found id: ""
	I1129 09:30:04.593362   35232 logs.go:282] 1 containers: [d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d]
	I1129 09:30:04.593427   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:04.597597   35232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:30:04.597670   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:30:04.634681   35232 cri.go:89] found id: "2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:30:04.634706   35232 cri.go:89] found id: ""
	I1129 09:30:04.634716   35232 logs.go:282] 1 containers: [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5]
	I1129 09:30:04.634794   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:04.640571   35232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:30:04.640664   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:30:04.681526   35232 cri.go:89] found id: "5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b"
	I1129 09:30:04.681553   35232 cri.go:89] found id: "c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:30:04.681560   35232 cri.go:89] found id: ""
	I1129 09:30:04.681570   35232 logs.go:282] 2 containers: [5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a]
	I1129 09:30:04.681634   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:04.686228   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:04.690729   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:30:04.690823   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:30:04.737669   35232 cri.go:89] found id: "904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b"
	I1129 09:30:04.737692   35232 cri.go:89] found id: "a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:30:04.737698   35232 cri.go:89] found id: ""
	I1129 09:30:04.737707   35232 logs.go:282] 2 containers: [904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126]
	I1129 09:30:04.737773   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:04.743040   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:04.748184   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:30:04.748252   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:30:04.788461   35232 cri.go:89] found id: "3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:30:04.788488   35232 cri.go:89] found id: "3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:30:04.788494   35232 cri.go:89] found id: ""
	I1129 09:30:04.788506   35232 logs.go:282] 2 containers: [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df]
	I1129 09:30:04.788600   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:04.795016   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:04.800315   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:30:04.800396   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:30:04.838636   35232 cri.go:89] found id: "b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:30:04.838667   35232 cri.go:89] found id: ""
	I1129 09:30:04.838678   35232 logs.go:282] 1 containers: [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6]
	I1129 09:30:04.838752   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:04.843352   35232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:30:04.843429   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:30:04.881659   35232 cri.go:89] found id: ""
	I1129 09:30:04.881700   35232 logs.go:282] 0 containers: []
	W1129 09:30:04.881712   35232 logs.go:284] No container was found matching "kindnet"
	I1129 09:30:04.881721   35232 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:30:04.881782   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:30:04.918462   35232 cri.go:89] found id: "60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:30:04.918489   35232 cri.go:89] found id: ""
	I1129 09:30:04.918500   35232 logs.go:282] 1 containers: [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666]
	I1129 09:30:04.918564   35232 ssh_runner.go:195] Run: which crictl
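Each gathering pass starts by enumerating container IDs per component; the crictl invocation is visible verbatim above. A sketch of the same listing run locally (the log runs it on the guest over SSH via ssh_runner):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the `crictl ps -a --quiet --name=<name>` calls in
// the log: --quiet prints one container ID per line, nothing when no
// container matches (as with "kindnet" above).
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Println(name, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
```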
	I1129 09:30:04.923029   35232 logs.go:123] Gathering logs for etcd [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5] ...
	I1129 09:30:04.923057   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:30:04.969643   35232 logs.go:123] Gathering logs for coredns [c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a] ...
	I1129 09:30:04.969671   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:30:05.011586   35232 logs.go:123] Gathering logs for kube-proxy [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9] ...
	I1129 09:30:05.011629   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:30:05.075558   35232 logs.go:123] Gathering logs for storage-provisioner [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666] ...
	I1129 09:30:05.075596   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:30:05.116843   35232 logs.go:123] Gathering logs for container status ...
	I1129 09:30:05.116874   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:30:05.163867   35232 logs.go:123] Gathering logs for kubelet ...
	I1129 09:30:05.163899   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:30:05.261927   35232 logs.go:123] Gathering logs for kube-scheduler [904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b] ...
	I1129 09:30:05.261975   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b"
	I1129 09:30:05.358603   35232 logs.go:123] Gathering logs for kube-proxy [3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df] ...
	I1129 09:30:05.358646   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:30:05.400236   35232 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:30:05.400269   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:30:05.740978   35232 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:30:05.741014   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:30:05.811481   35232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:30:05.811510   35232 logs.go:123] Gathering logs for kube-scheduler [a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126] ...
	I1129 09:30:05.811532   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:30:05.849963   35232 logs.go:123] Gathering logs for kube-controller-manager [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6] ...
	I1129 09:30:05.849996   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:30:05.897347   35232 logs.go:123] Gathering logs for dmesg ...
	I1129 09:30:05.897384   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:30:05.912214   35232 logs.go:123] Gathering logs for kube-apiserver [d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d] ...
	I1129 09:30:05.912249   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d"
	I1129 09:30:05.956575   35232 logs.go:123] Gathering logs for coredns [5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b] ...
	I1129 09:30:05.956607   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b"
	I1129 09:30:03.348490   40298 api_server.go:269] stopped: https://192.168.83.104:8443/healthz: Get "https://192.168.83.104:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1129 09:30:03.348535   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:04.968236   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:04.968983   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:04.968999   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:04.969360   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:04.969394   40531 retry.go:31] will retry after 1.18871546s: waiting for domain to come up
	I1129 09:30:06.159423   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:06.160184   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:06.160205   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:06.160661   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:06.160717   40531 retry.go:31] will retry after 1.576835139s: waiting for domain to come up
	I1129 09:30:07.739409   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:07.740176   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:07.740196   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:07.740642   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:07.740678   40531 retry.go:31] will retry after 2.234982579s: waiting for domain to come up
	I1129 09:30:08.515350   35232 api_server.go:253] Checking apiserver healthz at https://192.168.72.99:8443/healthz ...
	I1129 09:30:08.516205   35232 api_server.go:269] stopped: https://192.168.72.99:8443/healthz: Get "https://192.168.72.99:8443/healthz": dial tcp 192.168.72.99:8443: connect: connection refused
	I1129 09:30:08.516261   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:30:08.516315   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:30:08.562042   35232 cri.go:89] found id: "d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d"
	I1129 09:30:08.562069   35232 cri.go:89] found id: ""
	I1129 09:30:08.562080   35232 logs.go:282] 1 containers: [d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d]
	I1129 09:30:08.562146   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:08.568910   35232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:30:08.569004   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:30:08.614625   35232 cri.go:89] found id: "2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:30:08.614661   35232 cri.go:89] found id: ""
	I1129 09:30:08.614673   35232 logs.go:282] 1 containers: [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5]
	I1129 09:30:08.614765   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:08.621168   35232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:30:08.621260   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:30:08.665974   35232 cri.go:89] found id: "5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b"
	I1129 09:30:08.666010   35232 cri.go:89] found id: "c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:30:08.666016   35232 cri.go:89] found id: ""
	I1129 09:30:08.666024   35232 logs.go:282] 2 containers: [5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a]
	I1129 09:30:08.666087   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:08.672471   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:08.679006   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:30:08.679097   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:30:08.722306   35232 cri.go:89] found id: "904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b"
	I1129 09:30:08.722336   35232 cri.go:89] found id: "a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:30:08.722343   35232 cri.go:89] found id: ""
	I1129 09:30:08.722352   35232 logs.go:282] 2 containers: [904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126]
	I1129 09:30:08.722425   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:08.727478   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:08.732128   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:30:08.732207   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:30:08.775146   35232 cri.go:89] found id: "3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:30:08.775173   35232 cri.go:89] found id: "3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:30:08.775179   35232 cri.go:89] found id: ""
	I1129 09:30:08.775188   35232 logs.go:282] 2 containers: [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df]
	I1129 09:30:08.775246   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:08.781486   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:08.785840   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:30:08.785921   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:30:08.825241   35232 cri.go:89] found id: "b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:30:08.825273   35232 cri.go:89] found id: ""
	I1129 09:30:08.825283   35232 logs.go:282] 1 containers: [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6]
	I1129 09:30:08.825355   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:08.830641   35232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:30:08.830717   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:30:08.868691   35232 cri.go:89] found id: ""
	I1129 09:30:08.868722   35232 logs.go:282] 0 containers: []
	W1129 09:30:08.868733   35232 logs.go:284] No container was found matching "kindnet"
	I1129 09:30:08.868741   35232 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:30:08.868848   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:30:08.913200   35232 cri.go:89] found id: "60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:30:08.913230   35232 cri.go:89] found id: ""
	I1129 09:30:08.913240   35232 logs.go:282] 1 containers: [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666]
	I1129 09:30:08.913309   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:08.917860   35232 logs.go:123] Gathering logs for kubelet ...
	I1129 09:30:08.917896   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:30:09.046616   35232 logs.go:123] Gathering logs for kube-apiserver [d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d] ...
	I1129 09:30:09.046655   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d"
	I1129 09:30:09.095604   35232 logs.go:123] Gathering logs for coredns [c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a] ...
	I1129 09:30:09.095659   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:30:09.134904   35232 logs.go:123] Gathering logs for storage-provisioner [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666] ...
	I1129 09:30:09.134944   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:30:09.181643   35232 logs.go:123] Gathering logs for coredns [5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b] ...
	I1129 09:30:09.181680   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b"
	I1129 09:30:09.231534   35232 logs.go:123] Gathering logs for kube-scheduler [904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b] ...
	I1129 09:30:09.231571   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b"
	I1129 09:30:09.319369   35232 logs.go:123] Gathering logs for kube-scheduler [a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126] ...
	I1129 09:30:09.319409   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:30:09.363625   35232 logs.go:123] Gathering logs for kube-proxy [3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df] ...
	I1129 09:30:09.363656   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:30:09.423019   35232 logs.go:123] Gathering logs for dmesg ...
	I1129 09:30:09.423050   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:30:09.442809   35232 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:30:09.442855   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:30:09.515736   35232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:30:09.515767   35232 logs.go:123] Gathering logs for etcd [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5] ...
	I1129 09:30:09.515787   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:30:09.571382   35232 logs.go:123] Gathering logs for kube-proxy [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9] ...
	I1129 09:30:09.571417   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:30:09.650842   35232 logs.go:123] Gathering logs for kube-controller-manager [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6] ...
	I1129 09:30:09.650900   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:30:09.704463   35232 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:30:09.704501   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:30:10.056788   35232 logs.go:123] Gathering logs for container status ...
	I1129 09:30:10.056844   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:30:08.349600   40298 api_server.go:269] stopped: https://192.168.83.104:8443/healthz: Get "https://192.168.83.104:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1129 09:30:08.349666   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:12.202995   40298 api_server.go:269] stopped: https://192.168.83.104:8443/healthz: Get "https://192.168.83.104:8443/healthz": read tcp 192.168.83.1:51072->192.168.83.104:8443: read: connection reset by peer
	I1129 09:30:12.203070   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:12.203702   40298 api_server.go:269] stopped: https://192.168.83.104:8443/healthz: Get "https://192.168.83.104:8443/healthz": dial tcp 192.168.83.104:8443: connect: connection refused
	I1129 09:30:12.347023   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:12.347727   40298 api_server.go:269] stopped: https://192.168.83.104:8443/healthz: Get "https://192.168.83.104:8443/healthz": dial tcp 192.168.83.104:8443: connect: connection refused
	I1129 09:30:12.846959   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:12.847804   40298 api_server.go:269] stopped: https://192.168.83.104:8443/healthz: Get "https://192.168.83.104:8443/healthz": dial tcp 192.168.83.104:8443: connect: connection refused
	I1129 09:30:09.977305   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:09.978201   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:09.978229   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:09.978739   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:09.978788   40531 retry.go:31] will retry after 1.868339444s: waiting for domain to come up
	I1129 09:30:11.850107   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:11.850880   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:11.850905   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:11.851459   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:11.851508   40531 retry.go:31] will retry after 3.137454875s: waiting for domain to come up
	I1129 09:30:12.611397   35232 api_server.go:253] Checking apiserver healthz at https://192.168.72.99:8443/healthz ...
	I1129 09:30:12.612062   35232 api_server.go:269] stopped: https://192.168.72.99:8443/healthz: Get "https://192.168.72.99:8443/healthz": dial tcp 192.168.72.99:8443: connect: connection refused
	I1129 09:30:12.612123   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:30:12.612171   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:30:12.650044   35232 cri.go:89] found id: "d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d"
	I1129 09:30:12.650068   35232 cri.go:89] found id: ""
	I1129 09:30:12.650077   35232 logs.go:282] 1 containers: [d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d]
	I1129 09:30:12.650141   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:12.654621   35232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:30:12.654694   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:30:12.691402   35232 cri.go:89] found id: "2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:30:12.691430   35232 cri.go:89] found id: ""
	I1129 09:30:12.691438   35232 logs.go:282] 1 containers: [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5]
	I1129 09:30:12.691492   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:12.700758   35232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:30:12.700855   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:30:12.737195   35232 cri.go:89] found id: "5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b"
	I1129 09:30:12.737216   35232 cri.go:89] found id: "c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:30:12.737221   35232 cri.go:89] found id: ""
	I1129 09:30:12.737228   35232 logs.go:282] 2 containers: [5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a]
	I1129 09:30:12.737280   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:12.741355   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:12.745996   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:30:12.746071   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:30:12.783205   35232 cri.go:89] found id: "904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b"
	I1129 09:30:12.783226   35232 cri.go:89] found id: "a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:30:12.783230   35232 cri.go:89] found id: ""
	I1129 09:30:12.783237   35232 logs.go:282] 2 containers: [904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126]
	I1129 09:30:12.783288   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:12.787559   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:12.791452   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:30:12.791524   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:30:12.827692   35232 cri.go:89] found id: "3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:30:12.827724   35232 cri.go:89] found id: "3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:30:12.827731   35232 cri.go:89] found id: ""
	I1129 09:30:12.827741   35232 logs.go:282] 2 containers: [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df]
	I1129 09:30:12.827804   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:12.832115   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:12.836391   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:30:12.836470   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:30:12.870447   35232 cri.go:89] found id: "b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:30:12.870470   35232 cri.go:89] found id: ""
	I1129 09:30:12.870482   35232 logs.go:282] 1 containers: [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6]
	I1129 09:30:12.870547   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:12.875072   35232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:30:12.875150   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:30:12.909258   35232 cri.go:89] found id: ""
	I1129 09:30:12.909284   35232 logs.go:282] 0 containers: []
	W1129 09:30:12.909291   35232 logs.go:284] No container was found matching "kindnet"
	I1129 09:30:12.909297   35232 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:30:12.909356   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:30:12.946080   35232 cri.go:89] found id: "60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:30:12.946117   35232 cri.go:89] found id: ""
	I1129 09:30:12.946127   35232 logs.go:282] 1 containers: [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666]
	I1129 09:30:12.946197   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:12.950511   35232 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:30:12.950534   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:30:13.289341   35232 logs.go:123] Gathering logs for kubelet ...
	I1129 09:30:13.289377   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:30:13.390163   35232 logs.go:123] Gathering logs for kube-scheduler [904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b] ...
	I1129 09:30:13.390199   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b"
	I1129 09:30:13.480719   35232 logs.go:123] Gathering logs for kube-controller-manager [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6] ...
	I1129 09:30:13.480759   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:30:13.517823   35232 logs.go:123] Gathering logs for kube-apiserver [d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d] ...
	I1129 09:30:13.517867   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d"
	I1129 09:30:13.559328   35232 logs.go:123] Gathering logs for etcd [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5] ...
	I1129 09:30:13.559382   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:30:13.605760   35232 logs.go:123] Gathering logs for coredns [5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b] ...
	I1129 09:30:13.605799   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b"
	I1129 09:30:13.667862   35232 logs.go:123] Gathering logs for storage-provisioner [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666] ...
	I1129 09:30:13.667915   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:30:13.724161   35232 logs.go:123] Gathering logs for container status ...
	I1129 09:30:13.724203   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:30:13.771564   35232 logs.go:123] Gathering logs for coredns [c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a] ...
	I1129 09:30:13.771605   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:30:13.809153   35232 logs.go:123] Gathering logs for kube-proxy [3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df] ...
	I1129 09:30:13.809190   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:30:13.853511   35232 logs.go:123] Gathering logs for dmesg ...
	I1129 09:30:13.853543   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:30:13.869414   35232 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:30:13.869448   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:30:13.944782   35232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:30:13.944811   35232 logs.go:123] Gathering logs for kube-scheduler [a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126] ...
	I1129 09:30:13.944842   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:30:13.982026   35232 logs.go:123] Gathering logs for kube-proxy [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9] ...
	I1129 09:30:13.982061   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:30:16.536933   35232 api_server.go:253] Checking apiserver healthz at https://192.168.72.99:8443/healthz ...
	I1129 09:30:16.537672   35232 api_server.go:269] stopped: https://192.168.72.99:8443/healthz: Get "https://192.168.72.99:8443/healthz": dial tcp 192.168.72.99:8443: connect: connection refused
	I1129 09:30:16.537733   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:30:16.537793   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:30:16.583935   35232 cri.go:89] found id: "d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d"
	I1129 09:30:16.583951   35232 cri.go:89] found id: ""
	I1129 09:30:16.583961   35232 logs.go:282] 1 containers: [d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d]
	I1129 09:30:16.584010   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:16.588618   35232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:30:16.588689   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:30:16.630951   35232 cri.go:89] found id: "2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:30:16.630972   35232 cri.go:89] found id: ""
	I1129 09:30:16.630980   35232 logs.go:282] 1 containers: [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5]
	I1129 09:30:16.631036   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:16.635823   35232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:30:16.635911   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:30:16.681389   35232 cri.go:89] found id: "5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b"
	I1129 09:30:16.681416   35232 cri.go:89] found id: "c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:30:16.681423   35232 cri.go:89] found id: ""
	I1129 09:30:16.681431   35232 logs.go:282] 2 containers: [5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a]
	I1129 09:30:16.681490   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:16.685871   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:16.689817   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:30:16.689908   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:30:16.735861   35232 cri.go:89] found id: "904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b"
	I1129 09:30:16.735882   35232 cri.go:89] found id: "a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:30:16.735887   35232 cri.go:89] found id: ""
	I1129 09:30:16.735895   35232 logs.go:282] 2 containers: [904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126]
	I1129 09:30:16.735952   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:16.740797   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:16.745955   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:30:16.746033   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:30:16.794503   35232 cri.go:89] found id: "3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:30:16.794548   35232 cri.go:89] found id: "3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:30:16.794554   35232 cri.go:89] found id: ""
	I1129 09:30:16.794564   35232 logs.go:282] 2 containers: [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df]
	I1129 09:30:16.794621   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:16.799130   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:16.803159   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:30:16.803228   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:30:16.846625   35232 cri.go:89] found id: "b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:30:16.846646   35232 cri.go:89] found id: ""
	I1129 09:30:16.846655   35232 logs.go:282] 1 containers: [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6]
	I1129 09:30:16.846705   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:16.851012   35232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:30:16.851082   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:30:16.899816   35232 cri.go:89] found id: ""
	I1129 09:30:16.899854   35232 logs.go:282] 0 containers: []
	W1129 09:30:16.899862   35232 logs.go:284] No container was found matching "kindnet"
	I1129 09:30:16.899869   35232 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:30:16.899923   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:30:16.949008   35232 cri.go:89] found id: "60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:30:16.949035   35232 cri.go:89] found id: ""
	I1129 09:30:16.949045   35232 logs.go:282] 1 containers: [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666]
	I1129 09:30:16.949111   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:16.954816   35232 logs.go:123] Gathering logs for dmesg ...
	I1129 09:30:16.954867   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:30:16.973593   35232 logs.go:123] Gathering logs for kube-proxy [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9] ...
	I1129 09:30:16.973633   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:30:13.347387   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:13.348076   40298 api_server.go:269] stopped: https://192.168.83.104:8443/healthz: Get "https://192.168.83.104:8443/healthz": dial tcp 192.168.83.104:8443: connect: connection refused
	I1129 09:30:13.847864   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:13.848640   40298 api_server.go:269] stopped: https://192.168.83.104:8443/healthz: Get "https://192.168.83.104:8443/healthz": dial tcp 192.168.83.104:8443: connect: connection refused
	I1129 09:30:14.347306   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:16.095772   40298 api_server.go:279] https://192.168.83.104:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1129 09:30:16.095797   40298 api_server.go:103] status: https://192.168.83.104:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1129 09:30:16.095810   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:16.128362   40298 api_server.go:279] https://192.168.83.104:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1129 09:30:16.128393   40298 api_server.go:103] status: https://192.168.83.104:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1129 09:30:16.347795   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:16.354741   40298 api_server.go:279] https://192.168.83.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:30:16.354775   40298 api_server.go:103] status: https://192.168.83.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:30:16.847194   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:16.852745   40298 api_server.go:279] https://192.168.83.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:30:16.852770   40298 api_server.go:103] status: https://192.168.83.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:30:17.347403   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:17.354266   40298 api_server.go:279] https://192.168.83.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:30:17.354295   40298 api_server.go:103] status: https://192.168.83.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:30:17.846939   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:17.852114   40298 api_server.go:279] https://192.168.83.104:8443/healthz returned 200:
	ok
	I1129 09:30:17.860756   40298 api_server.go:141] control plane version: v1.34.1
	I1129 09:30:17.860790   40298 api_server.go:131] duration metric: took 24.513902411s to wait for apiserver health ...
	I1129 09:30:17.860802   40298 cni.go:84] Creating CNI manager for ""
	I1129 09:30:17.860812   40298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1129 09:30:17.862794   40298 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1129 09:30:17.864382   40298 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1129 09:30:17.887645   40298 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1129 09:30:17.915179   40298 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:30:17.930946   40298 system_pods.go:59] 6 kube-system pods found
	I1129 09:30:17.930991   40298 system_pods.go:61] "coredns-66bc5c9577-4bmms" [64220006-2ede-426c-bd55-8a0c72981851] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:30:17.931002   40298 system_pods.go:61] "etcd-pause-893760" [e4f015d5-b1a6-4405-b118-9db7b7341c41] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:30:17.931012   40298 system_pods.go:61] "kube-apiserver-pause-893760" [3fea2b50-f890-473d-969e-0ff61c070432] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:30:17.931023   40298 system_pods.go:61] "kube-controller-manager-pause-893760" [cdf18de5-80b4-431a-9287-71bbef4a21b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:30:17.931030   40298 system_pods.go:61] "kube-proxy-rzkwr" [8d0fdc57-ce2f-483b-82f2-006931b3ab39] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 09:30:17.931037   40298 system_pods.go:61] "kube-scheduler-pause-893760" [fcb17e31-c1eb-4490-9ff2-f3ad36f7b4a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:30:17.931045   40298 system_pods.go:74] duration metric: took 15.840219ms to wait for pod list to return data ...
	I1129 09:30:17.931054   40298 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:30:17.949644   40298 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1129 09:30:17.949681   40298 node_conditions.go:123] node cpu capacity is 2
	I1129 09:30:17.949704   40298 node_conditions.go:105] duration metric: took 18.643586ms to run NodePressure ...
	I1129 09:30:17.949770   40298 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1129 09:30:18.324785   40298 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1129 09:30:18.329443   40298 kubeadm.go:744] kubelet initialised
	I1129 09:30:18.329468   40298 kubeadm.go:745] duration metric: took 4.654312ms waiting for restarted kubelet to initialise ...
	I1129 09:30:18.329487   40298 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:30:18.353236   40298 ops.go:34] apiserver oom_adj: -16
	I1129 09:30:18.353267   40298 kubeadm.go:602] duration metric: took 27.388549197s to restartPrimaryControlPlane
	I1129 09:30:18.353279   40298 kubeadm.go:403] duration metric: took 27.658955597s to StartCluster
	I1129 09:30:18.353299   40298 settings.go:142] acquiring lock: {Name:mkb0bfd7d63d07772bc8411985c986880254a5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:30:18.353410   40298 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-5651/kubeconfig
	I1129 09:30:18.354989   40298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/kubeconfig: {Name:mk06369260b11b7542906282ff812e026bce8478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:30:18.355302   40298 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.83.104 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:30:18.355391   40298 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:30:18.355594   40298 config.go:182] Loaded profile config "pause-893760": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:30:18.358261   40298 out.go:179] * Verifying Kubernetes components...
	I1129 09:30:18.358290   40298 out.go:179] * Enabled addons: 
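
[editor's note] The 40298 lines above show the shape of minikube's apiserver readiness poll (api_server.go): the endpoint first refuses connections, then returns 403 (RBAC bootstrap has not yet granted even anonymous healthz access, so per-check failure reasons are withheld), then 500 while poststarthooks such as rbac/bootstrap-roles are still pending, and finally 200 "ok". Below is a minimal Go sketch of such a poll loop, not minikube's actual implementation; the URL, the ~500ms cadence, and the InsecureSkipVerify transport are illustrative assumptions (minikube's real client trusts the cluster CA).

```go
// Sketch of an apiserver /healthz poll loop like the one logged above.
// Assumption: TLS verification is skipped here for brevity only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			// 403: RBAC bootstrap unfinished; 500: poststarthooks pending.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		// Connection refused lands here too; retry on a fixed cadence,
		// matching the ~500ms spacing of the checks in the log.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.83.104:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
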
	I1129 09:30:14.990078   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:14.990922   40531 main.go:143] libmachine: domain auto-473168 has current primary IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:14.990938   40531 main.go:143] libmachine: found domain IP: 192.168.50.142
	I1129 09:30:14.990945   40531 main.go:143] libmachine: reserving static IP address...
	I1129 09:30:14.991398   40531 main.go:143] libmachine: unable to find host DHCP lease matching {name: "auto-473168", mac: "52:54:00:a2:da:2e", ip: "192.168.50.142"} in network mk-auto-473168
	I1129 09:30:15.237104   40531 main.go:143] libmachine: reserved static IP address 192.168.50.142 for domain auto-473168
	I1129 09:30:15.237132   40531 main.go:143] libmachine: waiting for SSH...
	I1129 09:30:15.237148   40531 main.go:143] libmachine: Getting to WaitForSSH function...
	I1129 09:30:15.240985   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.241605   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:15.241635   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.241890   40531 main.go:143] libmachine: Using SSH client type: native
	I1129 09:30:15.242127   40531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1129 09:30:15.242139   40531 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1129 09:30:15.352530   40531 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:30:15.353087   40531 main.go:143] libmachine: domain creation complete
	I1129 09:30:15.355012   40531 machine.go:94] provisionDockerMachine start ...
	I1129 09:30:15.357987   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.358462   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:15.358491   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.358682   40531 main.go:143] libmachine: Using SSH client type: native
	I1129 09:30:15.358977   40531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1129 09:30:15.358999   40531 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:30:15.469789   40531 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1129 09:30:15.469862   40531 buildroot.go:166] provisioning hostname "auto-473168"
	I1129 09:30:15.473205   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.473757   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:15.473807   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.474061   40531 main.go:143] libmachine: Using SSH client type: native
	I1129 09:30:15.474306   40531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1129 09:30:15.474326   40531 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-473168 && echo "auto-473168" | sudo tee /etc/hostname
	I1129 09:30:15.609354   40531 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-473168
	
	I1129 09:30:15.613239   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.613747   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:15.613792   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.614045   40531 main.go:143] libmachine: Using SSH client type: native
	I1129 09:30:15.614352   40531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1129 09:30:15.614378   40531 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-473168' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-473168/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-473168' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:30:15.731159   40531 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1129 09:30:15.731209   40531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5651/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5651/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5651/.minikube}
	I1129 09:30:15.731254   40531 buildroot.go:174] setting up certificates
	I1129 09:30:15.731269   40531 provision.go:84] configureAuth start
	I1129 09:30:15.734244   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.734670   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:15.734693   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.737304   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.737747   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:15.737774   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.737962   40531 provision.go:143] copyHostCerts
	I1129 09:30:15.738033   40531 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5651/.minikube/ca.pem, removing ...
	I1129 09:30:15.738048   40531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5651/.minikube/ca.pem
	I1129 09:30:15.738141   40531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5651/.minikube/ca.pem (1082 bytes)
	I1129 09:30:15.738245   40531 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5651/.minikube/cert.pem, removing ...
	I1129 09:30:15.738260   40531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5651/.minikube/cert.pem
	I1129 09:30:15.738290   40531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5651/.minikube/cert.pem (1123 bytes)
	I1129 09:30:15.738349   40531 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5651/.minikube/key.pem, removing ...
	I1129 09:30:15.738356   40531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5651/.minikube/key.pem
	I1129 09:30:15.738378   40531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5651/.minikube/key.pem (1679 bytes)
	I1129 09:30:15.738442   40531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5651/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca-key.pem org=jenkins.auto-473168 san=[127.0.0.1 192.168.50.142 auto-473168 localhost minikube]
	I1129 09:30:15.837963   40531 provision.go:177] copyRemoteCerts
	I1129 09:30:15.838043   40531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:30:15.841894   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.842402   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:15.842449   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.842635   40531 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168/id_rsa Username:docker}
	I1129 09:30:15.932336   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:30:15.968366   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 09:30:16.006925   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1129 09:30:16.040753   40531 provision.go:87] duration metric: took 309.466886ms to configureAuth
	I1129 09:30:16.040784   40531 buildroot.go:189] setting minikube options for container-runtime
	I1129 09:30:16.040988   40531 config.go:182] Loaded profile config "auto-473168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:30:16.044568   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.045175   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:16.045203   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.045440   40531 main.go:143] libmachine: Using SSH client type: native
	I1129 09:30:16.045788   40531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1129 09:30:16.045821   40531 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:30:16.313142   40531 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:30:16.313173   40531 machine.go:97] duration metric: took 958.140926ms to provisionDockerMachine
	I1129 09:30:16.313189   40531 client.go:176] duration metric: took 17.228376368s to LocalClient.Create
	I1129 09:30:16.313210   40531 start.go:167] duration metric: took 17.228446593s to libmachine.API.Create "auto-473168"
	I1129 09:30:16.313221   40531 start.go:293] postStartSetup for "auto-473168" (driver="kvm2")
	I1129 09:30:16.313234   40531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:30:16.313316   40531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:30:16.317190   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.317844   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:16.317885   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.318111   40531 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168/id_rsa Username:docker}
	I1129 09:30:16.404732   40531 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:30:16.409948   40531 info.go:137] Remote host: Buildroot 2025.02
	I1129 09:30:16.409984   40531 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5651/.minikube/addons for local assets ...
	I1129 09:30:16.410055   40531 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5651/.minikube/files for local assets ...
	I1129 09:30:16.410130   40531 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5651/.minikube/files/etc/ssl/certs/96132.pem -> 96132.pem in /etc/ssl/certs
	I1129 09:30:16.410277   40531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:30:16.423910   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/files/etc/ssl/certs/96132.pem --> /etc/ssl/certs/96132.pem (1708 bytes)
	I1129 09:30:16.455139   40531 start.go:296] duration metric: took 141.90363ms for postStartSetup
	I1129 09:30:16.459469   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.460716   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:16.460750   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.461166   40531 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/config.json ...
	I1129 09:30:16.461374   40531 start.go:128] duration metric: took 17.378264826s to createHost
	I1129 09:30:16.464280   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.464775   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:16.464801   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.464980   40531 main.go:143] libmachine: Using SSH client type: native
	I1129 09:30:16.465198   40531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1129 09:30:16.465214   40531 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1129 09:30:16.573908   40531 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764408616.532817684
	
	I1129 09:30:16.573941   40531 fix.go:216] guest clock: 1764408616.532817684
	I1129 09:30:16.573951   40531 fix.go:229] Guest: 2025-11-29 09:30:16.532817684 +0000 UTC Remote: 2025-11-29 09:30:16.461396315 +0000 UTC m=+17.492663956 (delta=71.421369ms)
	I1129 09:30:16.573972   40531 fix.go:200] guest clock delta is within tolerance: 71.421369ms
	I1129 09:30:16.573979   40531 start.go:83] releasing machines lock for "auto-473168", held for 17.49095756s
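
[editor's note] The fix.go lines just above run `date +%s.%N` on the guest, parse the seconds.nanoseconds string, and accept the machine if the host/guest delta (71ms here) is within tolerance. A small Go sketch of that parse-and-compare step, assuming nine fractional digits as `%N` prints; the helper name and the tolerance figure are illustrative, not minikube's actual values.

```go
// Sketch of the guest-clock check logged by fix.go: parse `date +%s.%N`
// output and compare it against the local clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Assumes nine fractional digits (nanoseconds), as %N emits.
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1764408616.532817684") // value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %s (accepted if within tolerance)\n", delta)
}
```
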
	I1129 09:30:16.577459   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.578120   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:16.578155   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.578784   40531 ssh_runner.go:195] Run: cat /version.json
	I1129 09:30:16.578816   40531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:30:16.582709   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.582927   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.583272   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:16.583305   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:16.583308   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.583335   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.583559   40531 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168/id_rsa Username:docker}
	I1129 09:30:16.583561   40531 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168/id_rsa Username:docker}
	I1129 09:30:16.695109   40531 ssh_runner.go:195] Run: systemctl --version
	I1129 09:30:16.703078   40531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:30:16.874865   40531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:30:16.881861   40531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:30:16.881975   40531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:30:16.912248   40531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 09:30:16.912281   40531 start.go:496] detecting cgroup driver to use...
	I1129 09:30:16.912362   40531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:30:16.934073   40531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:30:16.962395   40531 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:30:16.962472   40531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:30:16.982957   40531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:30:17.000541   40531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:30:17.220267   40531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:30:17.468862   40531 docker.go:234] disabling docker service ...
	I1129 09:30:17.468928   40531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:30:17.488703   40531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:30:17.507554   40531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:30:17.730483   40531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:30:17.926142   40531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:30:17.947566   40531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:30:17.979464   40531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:30:17.979571   40531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:30:17.993622   40531 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 09:30:17.993695   40531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:30:18.010484   40531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:30:18.026166   40531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:30:18.040388   40531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:30:18.055169   40531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:30:18.068695   40531 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:30:18.093322   40531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:30:18.107183   40531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:30:18.121797   40531 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1129 09:30:18.121887   40531 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1129 09:30:18.144951   40531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1129 09:30:18.161321   40531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:30:18.340687   40531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 09:30:18.481488   40531 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:30:18.481582   40531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:30:18.488806   40531 start.go:564] Will wait 60s for crictl version
	I1129 09:30:18.488893   40531 ssh_runner.go:195] Run: which crictl
	I1129 09:30:18.493636   40531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1129 09:30:18.536697   40531 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1129 09:30:18.536787   40531 ssh_runner.go:195] Run: crio --version
	I1129 09:30:18.573081   40531 ssh_runner.go:195] Run: crio --version
	I1129 09:30:18.607893   40531 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1129 09:30:18.612851   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:18.613441   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:18.613478   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:18.613780   40531 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1129 09:30:18.620120   40531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:30:18.637054   40531 kubeadm.go:884] updating cluster {Name:auto-473168 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-473168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.142 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:30:18.637252   40531 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:30:18.637320   40531 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:30:18.678077   40531 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1129 09:30:18.678166   40531 ssh_runner.go:195] Run: which lz4
	I1129 09:30:18.683018   40531 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1129 09:30:18.688160   40531 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1129 09:30:18.688190   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
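
[editor's note] In the 40531 chunk above, crio.go rewrites /etc/crio/crio.conf.d/02-crio.conf with sed to pin the pause image and the cgroup manager before restarting CRI-O. The Go sketch below mirrors just the pause_image rewrite as a regexp replace; it is an illustration of the logged sed command, not minikube's code, and error handling is minimal.

```go
// Sketch of the pause-image rewrite performed above via:
//   sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
package main

import (
	"fmt"
	"os"
	"regexp"
)

func setPauseImage(confPath, image string) error {
	data, err := os.ReadFile(confPath)
	if err != nil {
		return err
	}
	// Replace any whole line that sets pause_image, as the sed does.
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
	return os.WriteFile(confPath, out, 0o644)
}

func main() {
	err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10.1")
	if err != nil {
		fmt.Println(err)
	}
}
```
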
	I1129 09:30:17.032097   35232 logs.go:123] Gathering logs for kube-controller-manager [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6] ...
	I1129 09:30:17.032140   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:30:17.080334   35232 logs.go:123] Gathering logs for kube-apiserver [d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d] ...
	I1129 09:30:17.080376   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d"
	I1129 09:30:17.124940   35232 logs.go:123] Gathering logs for coredns [5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b] ...
	I1129 09:30:17.124976   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b"
	I1129 09:30:17.192543   35232 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:30:17.192593   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:30:17.644467   35232 logs.go:123] Gathering logs for kubelet ...
	I1129 09:30:17.644521   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:30:17.783135   35232 logs.go:123] Gathering logs for etcd [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5] ...
	I1129 09:30:17.783179   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:30:17.827955   35232 logs.go:123] Gathering logs for kube-scheduler [904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b] ...
	I1129 09:30:17.827998   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b"
	I1129 09:30:17.936342   35232 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:30:17.936395   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:30:18.040468   35232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:30:18.040486   35232 logs.go:123] Gathering logs for coredns [c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a] ...
	I1129 09:30:18.040503   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:30:18.095107   35232 logs.go:123] Gathering logs for kube-scheduler [a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126] ...
	I1129 09:30:18.095147   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:30:18.151486   35232 logs.go:123] Gathering logs for kube-proxy [3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df] ...
	I1129 09:30:18.151528   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:30:18.197722   35232 logs.go:123] Gathering logs for storage-provisioner [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666] ...
	I1129 09:30:18.197779   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:30:18.257050   35232 logs.go:123] Gathering logs for container status ...
	I1129 09:30:18.257088   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:30:20.818925   35232 api_server.go:253] Checking apiserver healthz at https://192.168.72.99:8443/healthz ...
	I1129 09:30:20.819701   35232 api_server.go:269] stopped: https://192.168.72.99:8443/healthz: Get "https://192.168.72.99:8443/healthz": dial tcp 192.168.72.99:8443: connect: connection refused
	I1129 09:30:20.819776   35232 kubeadm.go:602] duration metric: took 4m18.218810899s to restartPrimaryControlPlane
	W1129 09:30:20.819857   35232 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1129 09:30:20.819917   35232 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1129 09:30:18.359578   40298 addons.go:530] duration metric: took 4.197342ms for enable addons: enabled=[]
	I1129 09:30:18.359630   40298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:30:18.605921   40298 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:30:18.646410   40298 node_ready.go:35] waiting up to 6m0s for node "pause-893760" to be "Ready" ...
	I1129 09:30:18.651110   40298 node_ready.go:49] node "pause-893760" is "Ready"
	I1129 09:30:18.651149   40298 node_ready.go:38] duration metric: took 4.696684ms for node "pause-893760" to be "Ready" ...
	I1129 09:30:18.651169   40298 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:30:18.651240   40298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:30:18.682538   40298 api_server.go:72] duration metric: took 327.201087ms to wait for apiserver process to appear ...
	I1129 09:30:18.682561   40298 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:30:18.682583   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:18.691277   40298 api_server.go:279] https://192.168.83.104:8443/healthz returned 200:
	ok
	I1129 09:30:18.693021   40298 api_server.go:141] control plane version: v1.34.1
	I1129 09:30:18.693055   40298 api_server.go:131] duration metric: took 10.485429ms to wait for apiserver health ...
	I1129 09:30:18.693066   40298 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:30:18.699543   40298 system_pods.go:59] 6 kube-system pods found
	I1129 09:30:18.699582   40298 system_pods.go:61] "coredns-66bc5c9577-4bmms" [64220006-2ede-426c-bd55-8a0c72981851] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:30:18.699593   40298 system_pods.go:61] "etcd-pause-893760" [e4f015d5-b1a6-4405-b118-9db7b7341c41] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:30:18.699603   40298 system_pods.go:61] "kube-apiserver-pause-893760" [3fea2b50-f890-473d-969e-0ff61c070432] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:30:18.699613   40298 system_pods.go:61] "kube-controller-manager-pause-893760" [cdf18de5-80b4-431a-9287-71bbef4a21b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:30:18.699618   40298 system_pods.go:61] "kube-proxy-rzkwr" [8d0fdc57-ce2f-483b-82f2-006931b3ab39] Running
	I1129 09:30:18.699625   40298 system_pods.go:61] "kube-scheduler-pause-893760" [fcb17e31-c1eb-4490-9ff2-f3ad36f7b4a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:30:18.699634   40298 system_pods.go:74] duration metric: took 6.561137ms to wait for pod list to return data ...
	I1129 09:30:18.699644   40298 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:30:18.704131   40298 default_sa.go:45] found service account: "default"
	I1129 09:30:18.704160   40298 default_sa.go:55] duration metric: took 4.507979ms for default service account to be created ...
	I1129 09:30:18.704174   40298 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:30:18.709864   40298 system_pods.go:86] 6 kube-system pods found
	I1129 09:30:18.709896   40298 system_pods.go:89] "coredns-66bc5c9577-4bmms" [64220006-2ede-426c-bd55-8a0c72981851] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:30:18.709908   40298 system_pods.go:89] "etcd-pause-893760" [e4f015d5-b1a6-4405-b118-9db7b7341c41] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:30:18.709916   40298 system_pods.go:89] "kube-apiserver-pause-893760" [3fea2b50-f890-473d-969e-0ff61c070432] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:30:18.709924   40298 system_pods.go:89] "kube-controller-manager-pause-893760" [cdf18de5-80b4-431a-9287-71bbef4a21b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:30:18.709929   40298 system_pods.go:89] "kube-proxy-rzkwr" [8d0fdc57-ce2f-483b-82f2-006931b3ab39] Running
	I1129 09:30:18.709937   40298 system_pods.go:89] "kube-scheduler-pause-893760" [fcb17e31-c1eb-4490-9ff2-f3ad36f7b4a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:30:18.709947   40298 system_pods.go:126] duration metric: took 5.765488ms to wait for k8s-apps to be running ...
	I1129 09:30:18.709957   40298 system_svc.go:44] waiting for kubelet service to be running ...
	I1129 09:30:18.710013   40298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:30:18.737914   40298 system_svc.go:56] duration metric: took 27.945973ms WaitForService to wait for kubelet
	I1129 09:30:18.737944   40298 kubeadm.go:587] duration metric: took 382.610289ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:30:18.737959   40298 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:30:18.741786   40298 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1129 09:30:18.741808   40298 node_conditions.go:123] node cpu capacity is 2
	I1129 09:30:18.741817   40298 node_conditions.go:105] duration metric: took 3.853022ms to run NodePressure ...
	I1129 09:30:18.741849   40298 start.go:242] waiting for startup goroutines ...
	I1129 09:30:18.741859   40298 start.go:247] waiting for cluster config update ...
	I1129 09:30:18.741869   40298 start.go:256] writing updated cluster config ...
	I1129 09:30:18.742144   40298 ssh_runner.go:195] Run: rm -f paused
	I1129 09:30:18.748084   40298 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:30:18.748970   40298 kapi.go:59] client config for pause-893760: &rest.Config{Host:"https://192.168.83.104:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22000-5651/.minikube/profiles/pause-893760/client.crt", KeyFile:"/home/jenkins/minikube-integration/22000-5651/.minikube/profiles/pause-893760/client.key", CAFile:"/home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
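
The rest.Config dump above is the client minikube builds for the pause-893760 profile. As a minimal sketch (not minikube's actual code), the same Host plus cert/key/CA triple from the log can be turned into a usable client-go clientset; everything beyond those logged values is illustrative:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg := &rest.Config{
    		Host: "https://192.168.83.104:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/home/jenkins/minikube-integration/22000-5651/.minikube/profiles/pause-893760/client.crt",
    			KeyFile:  "/home/jenkins/minikube-integration/22000-5651/.minikube/profiles/pause-893760/client.key",
    			CAFile:   "/home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt",
    		},
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Same kind of call the pod wait below depends on: list kube-system pods.
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("found %d kube-system pods\n", len(pods.Items))
    }
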
	I1129 09:30:18.753403   40298 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4bmms" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:19.761324   40298 pod_ready.go:94] pod "coredns-66bc5c9577-4bmms" is "Ready"
	I1129 09:30:19.761362   40298 pod_ready.go:86] duration metric: took 1.007924984s for pod "coredns-66bc5c9577-4bmms" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:19.769111   40298 pod_ready.go:83] waiting for pod "etcd-pause-893760" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 09:30:21.776627   40298 pod_ready.go:104] pod "etcd-pause-893760" is not "Ready", error: <nil>
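
The pod_ready.go lines around here poll each control-plane pod until it reports Ready or is deleted; the `is not "Ready", error: <nil>` warnings are simply poll iterations that have not yet succeeded. A helper sketch of that loop with client-go (waitPodReadyOrGone is a hypothetical name; the 2s interval is illustrative, the 4m0s budget matches the log):

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReadyOrGone returns nil once the pod reports Ready, or once it
    // no longer exists ("Ready or be gone" in the log's phrasing).
    func waitPodReadyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 4*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if apierrors.IsNotFound(err) {
    				return true, nil // gone counts as done
    			}
    			if err != nil {
    				return false, nil // transient API error: keep polling
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }
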
	I1129 09:30:20.283449   40531 crio.go:462] duration metric: took 1.600445201s to copy over tarball
	I1129 09:30:20.283574   40531 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1129 09:30:22.020620   40531 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.737008634s)
	I1129 09:30:22.020659   40531 crio.go:469] duration metric: took 1.737163229s to extract the tarball
	I1129 09:30:22.020670   40531 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1129 09:30:22.066503   40531 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:30:22.118112   40531 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:30:22.118139   40531 cache_images.go:86] Images are preloaded, skipping loading
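
Above, the preloaded image tarball is unpacked with tar's lz4 filter and the result is checked with crictl. A rough local equivalent of those two commands (minikube actually runs them on the guest through ssh_runner; paths and sudo usage copied from the log):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Unpack the preload, preserving the security.capability xattr.
    	untar := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := untar.CombinedOutput(); err != nil {
    		log.Fatalf("extract preload: %v\n%s", err, out)
    	}
    	// Ask the runtime what it now sees, as JSON, mirroring the log's
    	// "crictl images --output json" verification step.
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		log.Fatalf("crictl images: %v", err)
    	}
    	log.Printf("crictl reported %d bytes of image metadata", len(out))
    }
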
	I1129 09:30:22.118149   40531 kubeadm.go:935] updating node { 192.168.50.142 8443 v1.34.1 crio true true} ...
	I1129 09:30:22.118253   40531 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-473168 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-473168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:30:22.118330   40531 ssh_runner.go:195] Run: crio config
	I1129 09:30:22.170237   40531 cni.go:84] Creating CNI manager for ""
	I1129 09:30:22.170266   40531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1129 09:30:22.170284   40531 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:30:22.170307   40531 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.142 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-473168 NodeName:auto-473168 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:30:22.170470   40531 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-473168"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.142"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.142"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:30:22.170538   40531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:30:22.183836   40531 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:30:22.183902   40531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:30:22.196062   40531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1129 09:30:22.218640   40531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:30:22.240655   40531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
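
The rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new before the init further below. When checking a config like this by hand, a dry run surfaces validation errors without touching the node; a sketch, assuming the same bundled kubeadm binary the log uses (minikube itself skips this and goes straight to init):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// --dry-run renders manifests and certs but starts nothing.
    	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubeadm",
    		"init", "--config", "/var/tmp/minikube/kubeadm.yaml", "--dry-run")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		log.Fatalf("kubeadm dry run failed: %v\n%s", err, out)
    	}
    	fmt.Printf("%s", out)
    }
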
	I1129 09:30:22.263705   40531 ssh_runner.go:195] Run: grep 192.168.50.142	control-plane.minikube.internal$ /etc/hosts
	I1129 09:30:22.268507   40531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.142	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
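
The bash one-liner above makes the /etc/hosts update idempotent: strip any existing control-plane.minikube.internal line, then append the current IP. The same logic sketched in Go (setHostsEntry is a hypothetical helper; path and entry format taken from the log):

    import (
    	"os"
    	"strings"
    )

    // setHostsEntry drops any stale "<ip>\t<host>" line and appends the new one.
    func setHostsEntry(ip, host string) error {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }
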
	I1129 09:30:22.285554   40531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:30:22.457149   40531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:30:22.482545   40531 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168 for IP: 192.168.50.142
	I1129 09:30:22.482567   40531 certs.go:195] generating shared ca certs ...
	I1129 09:30:22.482583   40531 certs.go:227] acquiring lock for ca certs: {Name:mk263acc791d5a2c77504c81548ce554781ff9eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:30:22.482744   40531 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5651/.minikube/ca.key
	I1129 09:30:22.482785   40531 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5651/.minikube/proxy-client-ca.key
	I1129 09:30:22.482792   40531 certs.go:257] generating profile certs ...
	I1129 09:30:22.482876   40531 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.key
	I1129 09:30:22.482890   40531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt with IP's: []
	I1129 09:30:22.645863   40531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt ...
	I1129 09:30:22.645892   40531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt: {Name:mk293d20ece963a3fdd9eef1ebb9b8ff8cae849d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:30:22.646065   40531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.key ...
	I1129 09:30:22.646076   40531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.key: {Name:mk2d0cfd80cc68c78b1a019a43e17f4a2d89ced5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:30:22.646153   40531 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.key.69d217c5
	I1129 09:30:22.646168   40531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.crt.69d217c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.142]
	I1129 09:30:22.722331   40531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.crt.69d217c5 ...
	I1129 09:30:22.722360   40531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.crt.69d217c5: {Name:mkdf3d4714b22705338cbe8f7750f3230b03791b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:30:22.722524   40531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.key.69d217c5 ...
	I1129 09:30:22.722539   40531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.key.69d217c5: {Name:mk494412e04878075c93b21456db16692b1823af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:30:22.722623   40531 certs.go:382] copying /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.crt.69d217c5 -> /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.crt
	I1129 09:30:22.722704   40531 certs.go:386] copying /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.key.69d217c5 -> /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.key
	I1129 09:30:22.722757   40531 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/proxy-client.key
	I1129 09:30:22.722768   40531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/proxy-client.crt with IP's: []
	I1129 09:30:22.832789   40531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/proxy-client.crt ...
	I1129 09:30:22.832815   40531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/proxy-client.crt: {Name:mkffcbc42b8fa26a5b25c89183d999a2f1f5010f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:30:22.832977   40531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/proxy-client.key ...
	I1129 09:30:22.832989   40531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/proxy-client.key: {Name:mkc7c5e824a41d56bc8478b0326edc3a0a8df5f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
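
crypto.go above issues each profile cert against the shared minikubeCA, embedding the apiserver's IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.50.142). A standalone sketch of that signing step with crypto/x509; a throwaway CA stands in for the reused .minikube/ca.key, and errors are elided for brevity:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA (the real run reuses the existing minikubeCA key pair).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Apiserver-style leaf cert carrying the IP SANs from the log.
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.142"),
    		},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
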
	I1129 09:30:22.833159   40531 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/9613.pem (1338 bytes)
	W1129 09:30:22.833198   40531 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5651/.minikube/certs/9613_empty.pem, impossibly tiny 0 bytes
	I1129 09:30:22.833210   40531 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:30:22.833236   40531 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem (1082 bytes)
	I1129 09:30:22.833260   40531 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:30:22.833287   40531 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/key.pem (1679 bytes)
	I1129 09:30:22.833328   40531 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/files/etc/ssl/certs/96132.pem (1708 bytes)
	I1129 09:30:22.833962   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:30:22.866077   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:30:22.895365   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:30:22.926738   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 09:30:22.958648   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1129 09:30:22.990548   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:30:23.021795   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:30:23.054141   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:30:23.086282   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/certs/9613.pem --> /usr/share/ca-certificates/9613.pem (1338 bytes)
	I1129 09:30:23.117060   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/files/etc/ssl/certs/96132.pem --> /usr/share/ca-certificates/96132.pem (1708 bytes)
	I1129 09:30:23.147790   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:30:23.184294   40531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:30:23.210443   40531 ssh_runner.go:195] Run: openssl version
	I1129 09:30:23.217949   40531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9613.pem && ln -fs /usr/share/ca-certificates/9613.pem /etc/ssl/certs/9613.pem"
	I1129 09:30:23.233079   40531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9613.pem
	I1129 09:30:23.238944   40531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:36 /usr/share/ca-certificates/9613.pem
	I1129 09:30:23.239025   40531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9613.pem
	I1129 09:30:23.248878   40531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9613.pem /etc/ssl/certs/51391683.0"
	I1129 09:30:23.263279   40531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96132.pem && ln -fs /usr/share/ca-certificates/96132.pem /etc/ssl/certs/96132.pem"
	I1129 09:30:23.277570   40531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96132.pem
	I1129 09:30:23.283272   40531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:36 /usr/share/ca-certificates/96132.pem
	I1129 09:30:23.283342   40531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96132.pem
	I1129 09:30:23.291006   40531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96132.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:30:23.305541   40531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:30:23.319864   40531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:30:23.325566   40531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:30:23.325640   40531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:30:23.332857   40531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
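
Each CA published under /usr/share/ca-certificates is then symlinked into /etc/ssl/certs under its OpenSSL subject hash (the 51391683.0, 3ec20f2e.0 and b5213941.0 names above), which is how TLS libraries locate it. A sketch of deriving one link name and creating the link; assumes openssl on PATH and local root access, where minikube does the same thing through the remote shell:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
    	// "openssl x509 -hash -noout" prints the subject hash used as the .0 name.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // mirror "ln -fs": replace any stale link
    	if err := os.Symlink(pemPath, link); err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("linked %s -> %s", link, pemPath)
    }
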
	I1129 09:30:23.347332   40531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:30:23.352681   40531 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
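
The stat failure above is expected: a missing apiserver-kubelet-client.crt is how certs.go distinguishes a first start from a restart. The check reduces to something like this sketch:

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    )

    func main() {
    	// First start if the kubelet client cert has never been issued.
    	_, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	fmt.Println("likely first start:", errors.Is(err, os.ErrNotExist))
    }
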
	I1129 09:30:23.352755   40531 kubeadm.go:401] StartCluster: {Name:auto-473168 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Clu
sterName:auto-473168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.142 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:30:23.352856   40531 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:30:23.352923   40531 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:30:23.396879   40531 cri.go:89] found id: ""
	I1129 09:30:23.396957   40531 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:30:23.411255   40531 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:30:23.424662   40531 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:30:23.440395   40531 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:30:23.440422   40531 kubeadm.go:158] found existing configuration files:
	
	I1129 09:30:23.440491   40531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:30:23.452770   40531 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:30:23.452858   40531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:30:23.465879   40531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:30:23.477155   40531 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:30:23.477240   40531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:30:23.489617   40531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:30:23.501654   40531 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:30:23.501715   40531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:30:23.514333   40531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:30:23.526636   40531 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:30:23.526697   40531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
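
The four grep/rm cycles above are a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443. On this first boot every file is absent, so each grep exits 2 and each rm is a no-op. The same logic condensed into one Go sketch (cleanStaleConfigs is a hypothetical helper; minikube shells out per file, as logged):

    import (
    	"os"
    	"path/filepath"
    	"strings"
    )

    func cleanStaleConfigs() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		p := filepath.Join("/etc/kubernetes", f)
    		data, err := os.ReadFile(p)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			_ = os.Remove(p) // missing or pointing elsewhere: clear it
    		}
    	}
    }
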
	I1129 09:30:23.539001   40531 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1129 09:30:23.595012   40531 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 09:30:23.595086   40531 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:30:23.708384   40531 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:30:23.708566   40531 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:30:23.708735   40531 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 09:30:23.721738   40531 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:30:23.874886   40531 out.go:252]   - Generating certificates and keys ...
	I1129 09:30:23.875035   40531 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:30:23.875149   40531 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:30:23.875267   40531 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	W1129 09:30:24.145339   40298 pod_ready.go:104] pod "etcd-pause-893760" is not "Ready", error: <nil>
	I1129 09:30:25.775394   40298 pod_ready.go:94] pod "etcd-pause-893760" is "Ready"
	I1129 09:30:25.775427   40298 pod_ready.go:86] duration metric: took 6.006288256s for pod "etcd-pause-893760" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:25.778252   40298 pod_ready.go:83] waiting for pod "kube-apiserver-pause-893760" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:25.782343   40298 pod_ready.go:94] pod "kube-apiserver-pause-893760" is "Ready"
	I1129 09:30:25.782368   40298 pod_ready.go:86] duration metric: took 4.09282ms for pod "kube-apiserver-pause-893760" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:25.785216   40298 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-893760" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:25.789119   40298 pod_ready.go:94] pod "kube-controller-manager-pause-893760" is "Ready"
	I1129 09:30:25.789142   40298 pod_ready.go:86] duration metric: took 3.903593ms for pod "kube-controller-manager-pause-893760" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:25.791277   40298 pod_ready.go:83] waiting for pod "kube-proxy-rzkwr" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:26.169466   40298 pod_ready.go:94] pod "kube-proxy-rzkwr" is "Ready"
	I1129 09:30:26.169490   40298 pod_ready.go:86] duration metric: took 378.196693ms for pod "kube-proxy-rzkwr" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:26.370321   40298 pod_ready.go:83] waiting for pod "kube-scheduler-pause-893760" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:26.769472   40298 pod_ready.go:94] pod "kube-scheduler-pause-893760" is "Ready"
	I1129 09:30:26.769508   40298 pod_ready.go:86] duration metric: took 399.151054ms for pod "kube-scheduler-pause-893760" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:26.769526   40298 pod_ready.go:40] duration metric: took 8.02140096s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:30:26.817716   40298 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:30:26.821978   40298 out.go:179] * Done! kubectl is now configured to use "pause-893760" cluster and "default" namespace by default
	
	
	==> CRI-O <==
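
The lines below are CRI-O's own debug log: each Request/Response pair is one CRI gRPC call (Version, ImageFsInfo, ListContainers) issued by the log collector against the runtime socket. The same container listing is available from the shell; a sketch, assuming sudo and the default CRI endpoint:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Rough equivalent of the ListContainers responses logged below.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--output", "json").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s", out)
    }
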
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.540587225Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8cea5c89-c15b-4e96-b4d2-54d53349bab6 name=/runtime.v1.RuntimeService/Version
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.542139577Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f435821-a9ca-44aa-b313-c37327c594e8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.542593518Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764408627542569150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f435821-a9ca-44aa-b313-c37327c594e8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.543528208Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66a43807-b27b-49b0-bab0-5cb767a554f1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.543725286Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66a43807-b27b-49b0-bab0-5cb767a554f1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.544447966Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eb4aed02a347d4f806f74d29f691b160f1752223360e1f4993891bc19937acc9,PodSandboxId:0d8ee0e9045738b8a99b81ad7857eef51301dd6ff4e25bd420f922f63899981a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764408617503397540,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzkwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0fdc57-ce2f-483b-82f2-006931b3ab39,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4805659e2d2350aa4b28a3f0a7e9befcdf9d1ce5c46b8a7418eacb37b589daf1,PodSandboxId:fab3926d67f6b2c76c5d114314c72b25a18f547391edfea90b81aa5abd13a417,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764408617512228376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4bmms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64220006-2ede-426c-bd55-8a0c72981851,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99a25a5d8939a18228694eb456392302e9c83463a0275b2753d434deae57f1ee,PodSandboxId:acd5b0858b31dfcc849493fbba20ef4b64d22d30eb16b79478f52c8838f1f98a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764408613528235700,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd77e32b976ddeeaa2821ad1581a49,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebedaca83ba826a1dbb5a46ab2511030acc3b00245a2abecd907793732b610d2,PodSandboxId:fb1043e8abc7adaee6b80b3719fb509dae494263df24a1bfb0f5b112fd52a084,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388
d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764408613501070510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d892aedcec9d261d3ce63d1f2447563a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75538cef284310fb254cadee824a4f44de67872163cbf4f332932a451a0b7db7,PodSandboxId:d43d12644b34ecad64ea2f2e8e8879d632abcb58ab983fb1a867bf05a693a240,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1764408591221150082,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc02c2dd86763f8a7654c214d1aca4ab,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae36582cbc8207da290442472aef7150dd5654da51d2c6bfb156077457c3420e,PodSandboxId:064a34577f14c0558cbe035415c72f0df3d0bd361760c3cc3e7f4548cd8790fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764408591174327569,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd7c40ab743b39365a90b8ce5ed742b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c8f96dff7d0d88bc3f9e905b659365005dcc3c0ab3a617d5aa75138ca581fd,PodSandboxId:0d8ee0e9045738b8a99b81ad7857eef51301dd6ff4e25b
d420f922f63899981a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_CREATED,CreatedAt:1764408591132570485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzkwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0fdc57-ce2f-483b-82f2-006931b3ab39,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77714cab099fbe439b9f36eb17008bc4c718f563945fac16204b748c134957c3,PodSandboxId:acd5b0858b31dfcc849493fbba20ef4b64d22d30eb16b79478f52c8838f1f98a,Metadata:&Cont
ainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1764408591082349561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd77e32b976ddeeaa2821ad1581a49,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49386cd4b239787192e49261e28712a3706738c55e7526c54f9bc6f
46fe925b4,PodSandboxId:fb1043e8abc7adaee6b80b3719fb509dae494263df24a1bfb0f5b112fd52a084,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1764408591039983083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d892aedcec9d261d3ce63d1f2447563a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ef4430e1d2bb8f6efd3aaff4706e7b741d6c4ede2877fa5847dff6b81a716e,PodSandboxId:990af6dc1b865ea31e52bd3b596be9612c1f140ab83c4c2bf9799ccbd542780f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1764408544729560132,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4bmms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64220006-2ede-426c-bd55-8a0c72981851,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"co
ntainerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:178b3ab1cb251b3a9f7c21cd176343ca8ae0a3af11799761ee56e2de3cedd41b,PodSandboxId:2e12473db9fcd13ed241426d8e2e1e024ca83e026fcef11cde19629fc98fed8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1764408531835129253,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893
760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd7c40ab743b39365a90b8ce5ed742b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7555626b5cb53f89c622444b7a65f0d4e5204daa98e629811921ef3bd8259c26,PodSandboxId:607b6ddd8dc665eb03849c32673ba6bfa5f3cf6b26ba656fb823186d5ef39b40,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1764408531785179883,Labels:map[string]string
{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc02c2dd86763f8a7654c214d1aca4ab,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66a43807-b27b-49b0-bab0-5cb767a554f1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.585570623Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ae75353-d636-4858-a872-1521d3d30d62 name=/runtime.v1.RuntimeService/Version
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.585657152Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ae75353-d636-4858-a872-1521d3d30d62 name=/runtime.v1.RuntimeService/Version
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.587066615Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c08d8e3-84f5-490c-b236-5d9a99ef85f6 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.587654223Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764408627587625327,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c08d8e3-84f5-490c-b236-5d9a99ef85f6 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.588645234Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d722dd28-7cad-48c9-ae08-d89304595860 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.588700536Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d722dd28-7cad-48c9-ae08-d89304595860 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.589029774Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eb4aed02a347d4f806f74d29f691b160f1752223360e1f4993891bc19937acc9,PodSandboxId:0d8ee0e9045738b8a99b81ad7857eef51301dd6ff4e25bd420f922f63899981a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764408617503397540,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzkwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0fdc57-ce2f-483b-82f2-006931b3ab39,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4805659e2d2350aa4b28a3f0a7e9befcdf9d1ce5c46b8a7418eacb37b589daf1,PodSandboxId:fab3926d67f6b2c76c5d114314c72b25a18f547391edfea90b81aa5abd13a417,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764408617512228376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4bmms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64220006-2ede-426c-bd55-8a0c72981851,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99a25a5d8939a18228694eb456392302e9c83463a0275b2753d434deae57f1ee,PodSandboxId:acd5b0858b31dfcc849493fbba20ef4b64d22d30eb16b79478f52c8838f1f98a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764408613528235700,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd77e32b976ddeeaa2821ad1581a49,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebedaca83ba826a1dbb5a46ab2511030acc3b00245a2abecd907793732b610d2,PodSandboxId:fb1043e8abc7adaee6b80b3719fb509dae494263df24a1bfb0f5b112fd52a084,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388
d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764408613501070510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d892aedcec9d261d3ce63d1f2447563a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75538cef284310fb254cadee824a4f44de67872163cbf4f332932a451a0b7db7,PodSandboxId:d43d12644b34ecad64ea2f2e8e8879d632abcb58ab983fb1a867bf05a693a240,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1764408591221150082,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc02c2dd86763f8a7654c214d1aca4ab,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae36582cbc8207da290442472aef7150dd5654da51d2c6bfb156077457c3420e,PodSandboxId:064a34577f14c0558cbe035415c72f0df3d0bd361760c3cc3e7f4548cd8790fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764408591174327569,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd7c40ab743b39365a90b8ce5ed742b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c8f96dff7d0d88bc3f9e905b659365005dcc3c0ab3a617d5aa75138ca581fd,PodSandboxId:0d8ee0e9045738b8a99b81ad7857eef51301dd6ff4e25b
d420f922f63899981a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_CREATED,CreatedAt:1764408591132570485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzkwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0fdc57-ce2f-483b-82f2-006931b3ab39,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77714cab099fbe439b9f36eb17008bc4c718f563945fac16204b748c134957c3,PodSandboxId:acd5b0858b31dfcc849493fbba20ef4b64d22d30eb16b79478f52c8838f1f98a,Metadata:&Cont
ainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1764408591082349561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd77e32b976ddeeaa2821ad1581a49,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49386cd4b239787192e49261e28712a3706738c55e7526c54f9bc6f
46fe925b4,PodSandboxId:fb1043e8abc7adaee6b80b3719fb509dae494263df24a1bfb0f5b112fd52a084,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1764408591039983083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d892aedcec9d261d3ce63d1f2447563a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ef4430e1d2bb8f6efd3aaff4706e7b741d6c4ede2877fa5847dff6b81a716e,PodSandboxId:990af6dc1b865ea31e52bd3b596be9612c1f140ab83c4c2bf9799ccbd542780f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1764408544729560132,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4bmms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64220006-2ede-426c-bd55-8a0c72981851,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"co
ntainerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:178b3ab1cb251b3a9f7c21cd176343ca8ae0a3af11799761ee56e2de3cedd41b,PodSandboxId:2e12473db9fcd13ed241426d8e2e1e024ca83e026fcef11cde19629fc98fed8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1764408531835129253,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893
760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd7c40ab743b39365a90b8ce5ed742b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7555626b5cb53f89c622444b7a65f0d4e5204daa98e629811921ef3bd8259c26,PodSandboxId:607b6ddd8dc665eb03849c32673ba6bfa5f3cf6b26ba656fb823186d5ef39b40,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1764408531785179883,Labels:map[string]string
{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc02c2dd86763f8a7654c214d1aca4ab,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d722dd28-7cad-48c9-ae08-d89304595860 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.636081043Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d1412752-cf4d-468a-bef2-86c6f56e2630 name=/runtime.v1.RuntimeService/Version
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.636185256Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d1412752-cf4d-468a-bef2-86c6f56e2630 name=/runtime.v1.RuntimeService/Version
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.640190936Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=63999947-3c35-4f9b-a2c6-9e919b32b324 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.640799064Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:fab3926d67f6b2c76c5d114314c72b25a18f547391edfea90b81aa5abd13a417,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-4bmms,Uid:64220006-2ede-426c-bd55-8a0c72981851,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1764408590898568122,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-4bmms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64220006-2ede-426c-bd55-8a0c72981851,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-29T09:29:03.709585255Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d43d12644b34ecad64ea2f2e8e8879d632abcb58ab983fb1a867bf05a693a240,Metadata:&PodSandboxMetadata{Name:etcd-pause-893760,Uid:bc02c2dd86763f8a7654c214d1aca4ab,Namespace:kube-system,Attempt:1,
},State:SANDBOX_READY,CreatedAt:1764408590649563183,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc02c2dd86763f8a7654c214d1aca4ab,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.104:2379,kubernetes.io/config.hash: bc02c2dd86763f8a7654c214d1aca4ab,kubernetes.io/config.seen: 2025-11-29T09:28:58.168459257Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0d8ee0e9045738b8a99b81ad7857eef51301dd6ff4e25bd420f922f63899981a,Metadata:&PodSandboxMetadata{Name:kube-proxy-rzkwr,Uid:8d0fdc57-ce2f-483b-82f2-006931b3ab39,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1764408590635829787,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-rzkwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8d0fdc57-ce2f-483b-82f2-006931b3ab39,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-29T09:29:03.401241389Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fb1043e8abc7adaee6b80b3719fb509dae494263df24a1bfb0f5b112fd52a084,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-893760,Uid:d892aedcec9d261d3ce63d1f2447563a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1764408590616365934,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d892aedcec9d261d3ce63d1f2447563a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d892aedcec9d261d3ce63d1f2447563a,kubernetes.io/config.seen: 2025-11-29T09:28:58.168463977Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:064a34577f14c0558cbe035415c72f0df
3d0bd361760c3cc3e7f4548cd8790fa,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-893760,Uid:2bd7c40ab743b39365a90b8ce5ed742b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1764408590602115173,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd7c40ab743b39365a90b8ce5ed742b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2bd7c40ab743b39365a90b8ce5ed742b,kubernetes.io/config.seen: 2025-11-29T09:28:58.168464739Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:acd5b0858b31dfcc849493fbba20ef4b64d22d30eb16b79478f52c8838f1f98a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-893760,Uid:c2bd77e32b976ddeeaa2821ad1581a49,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1764408590596665504,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.
kubernetes.pod.name: kube-apiserver-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd77e32b976ddeeaa2821ad1581a49,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.104:8443,kubernetes.io/config.hash: c2bd77e32b976ddeeaa2821ad1581a49,kubernetes.io/config.seen: 2025-11-29T09:28:58.168462711Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d4e719b339e025b106760bb57babb3db75593e5b0c574d56a2ffc000130f867a,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-2csdv,Uid:3eafa4ea-e1d3-4729-9d3e-bbe4126f722a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1764408544158732741,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-2csdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eafa4ea-e1d3-4729-9d3e-bbe4126f722a,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io
/config.seen: 2025-11-29T09:29:03.767155951Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:990af6dc1b865ea31e52bd3b596be9612c1f140ab83c4c2bf9799ccbd542780f,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-4bmms,Uid:64220006-2ede-426c-bd55-8a0c72981851,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1764408544070846503,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-4bmms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64220006-2ede-426c-bd55-8a0c72981851,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-29T09:29:03.709585255Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0a76eecf781827f45ca892334890ecaffa24687e4dc8dc485a5a3d4f5384668e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-893760,Uid:c2bd77e32b976ddeeaa2821ad1581a49,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1764408531589
016209,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd77e32b976ddeeaa2821ad1581a49,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.104:8443,kubernetes.io/config.hash: c2bd77e32b976ddeeaa2821ad1581a49,kubernetes.io/config.seen: 2025-11-29T09:28:51.005689421Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2e12473db9fcd13ed241426d8e2e1e024ca83e026fcef11cde19629fc98fed8f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-893760,Uid:2bd7c40ab743b39365a90b8ce5ed742b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1764408531581977060,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd7c40
ab743b39365a90b8ce5ed742b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2bd7c40ab743b39365a90b8ce5ed742b,kubernetes.io/config.seen: 2025-11-29T09:28:51.005691426Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:844655aa22d7230d668dcf8a3f479e78fa51d72ee680126341a834b774ca19ca,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-893760,Uid:d892aedcec9d261d3ce63d1f2447563a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1764408531575631980,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d892aedcec9d261d3ce63d1f2447563a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d892aedcec9d261d3ce63d1f2447563a,kubernetes.io/config.seen: 2025-11-29T09:28:51.005690609Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:607b6ddd
8dc665eb03849c32673ba6bfa5f3cf6b26ba656fb823186d5ef39b40,Metadata:&PodSandboxMetadata{Name:etcd-pause-893760,Uid:bc02c2dd86763f8a7654c214d1aca4ab,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1764408531562181668,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc02c2dd86763f8a7654c214d1aca4ab,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.104:2379,kubernetes.io/config.hash: bc02c2dd86763f8a7654c214d1aca4ab,kubernetes.io/config.seen: 2025-11-29T09:28:51.005685346Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=63999947-3c35-4f9b-a2c6-9e919b32b324 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.642406492Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5d889b55-0773-466f-8b45-97f5efcffa0f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.643227347Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764408627643188481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5d889b55-0773-466f-8b45-97f5efcffa0f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.643533909Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7383bae-5d29-4770-9867-ebb2125cec1f name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.643958446Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7383bae-5d29-4770-9867-ebb2125cec1f name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.645453795Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eb4aed02a347d4f806f74d29f691b160f1752223360e1f4993891bc19937acc9,PodSandboxId:0d8ee0e9045738b8a99b81ad7857eef51301dd6ff4e25bd420f922f63899981a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764408617503397540,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzkwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0fdc57-ce2f-483b-82f2-006931b3ab39,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4805659e2d2350aa4b28a3f0a7e9befcdf9d1ce5c46b8a7418eacb37b589daf1,PodSandboxId:fab3926d67f6b2c76c5d114314c72b25a18f547391edfea90b81aa5abd13a417,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764408617512228376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4bmms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64220006-2ede-426c-bd55-8a0c72981851,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99a25a5d8939a18228694eb456392302e9c83463a0275b2753d434deae57f1ee,PodSandboxId:acd5b0858b31dfcc849493fbba20ef4b64d22d30eb16b79478f52c8838f1f98a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764408613528235700,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd77e32b976ddeeaa2821ad1581a49,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebedaca83ba826a1dbb5a46ab2511030acc3b00245a2abecd907793732b610d2,PodSandboxId:fb1043e8abc7adaee6b80b3719fb509dae494263df24a1bfb0f5b112fd52a084,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388
d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764408613501070510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d892aedcec9d261d3ce63d1f2447563a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75538cef284310fb254cadee824a4f44de67872163cbf4f332932a451a0b7db7,PodSandboxId:d43d12644b34ecad64ea2f2e8e8879d632abcb58ab983fb1a867bf05a693a240,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1764408591221150082,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc02c2dd86763f8a7654c214d1aca4ab,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae36582cbc8207da290442472aef7150dd5654da51d2c6bfb156077457c3420e,PodSandboxId:064a34577f14c0558cbe035415c72f0df3d0bd361760c3cc3e7f4548cd8790fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764408591174327569,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd7c40ab743b39365a90b8ce5ed742b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c8f96dff7d0d88bc3f9e905b659365005dcc3c0ab3a617d5aa75138ca581fd,PodSandboxId:0d8ee0e9045738b8a99b81ad7857eef51301dd6ff4e25b
d420f922f63899981a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_CREATED,CreatedAt:1764408591132570485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzkwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0fdc57-ce2f-483b-82f2-006931b3ab39,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77714cab099fbe439b9f36eb17008bc4c718f563945fac16204b748c134957c3,PodSandboxId:acd5b0858b31dfcc849493fbba20ef4b64d22d30eb16b79478f52c8838f1f98a,Metadata:&Cont
ainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1764408591082349561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd77e32b976ddeeaa2821ad1581a49,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49386cd4b239787192e49261e28712a3706738c55e7526c54f9bc6f
46fe925b4,PodSandboxId:fb1043e8abc7adaee6b80b3719fb509dae494263df24a1bfb0f5b112fd52a084,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1764408591039983083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d892aedcec9d261d3ce63d1f2447563a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ef4430e1d2bb8f6efd3aaff4706e7b741d6c4ede2877fa5847dff6b81a716e,PodSandboxId:990af6dc1b865ea31e52bd3b596be9612c1f140ab83c4c2bf9799ccbd542780f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1764408544729560132,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4bmms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64220006-2ede-426c-bd55-8a0c72981851,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"co
ntainerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:178b3ab1cb251b3a9f7c21cd176343ca8ae0a3af11799761ee56e2de3cedd41b,PodSandboxId:2e12473db9fcd13ed241426d8e2e1e024ca83e026fcef11cde19629fc98fed8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1764408531835129253,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893
760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd7c40ab743b39365a90b8ce5ed742b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7555626b5cb53f89c622444b7a65f0d4e5204daa98e629811921ef3bd8259c26,PodSandboxId:607b6ddd8dc665eb03849c32673ba6bfa5f3cf6b26ba656fb823186d5ef39b40,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1764408531785179883,Labels:map[string]string
{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc02c2dd86763f8a7654c214d1aca4ab,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7383bae-5d29-4770-9867-ebb2125cec1f name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.647801924Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1fe46097-6f87-404d-88ab-bde8b583071d name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.647887768Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1fe46097-6f87-404d-88ab-bde8b583071d name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:27 pause-893760 crio[2792]: time="2025-11-29 09:30:27.648300382Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eb4aed02a347d4f806f74d29f691b160f1752223360e1f4993891bc19937acc9,PodSandboxId:0d8ee0e9045738b8a99b81ad7857eef51301dd6ff4e25bd420f922f63899981a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764408617503397540,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzkwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0fdc57-ce2f-483b-82f2-006931b3ab39,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4805659e2d2350aa4b28a3f0a7e9befcdf9d1ce5c46b8a7418eacb37b589daf1,PodSandboxId:fab3926d67f6b2c76c5d114314c72b25a18f547391edfea90b81aa5abd13a417,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764408617512228376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4bmms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64220006-2ede-426c-bd55-8a0c72981851,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99a25a5d8939a18228694eb456392302e9c83463a0275b2753d434deae57f1ee,PodSandboxId:acd5b0858b31dfcc849493fbba20ef4b64d22d30eb16b79478f52c8838f1f98a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764408613528235700,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd77e32b976ddeeaa2821ad1581a49,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebedaca83ba826a1dbb5a46ab2511030acc3b00245a2abecd907793732b610d2,PodSandboxId:fb1043e8abc7adaee6b80b3719fb509dae494263df24a1bfb0f5b112fd52a084,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388
d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764408613501070510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d892aedcec9d261d3ce63d1f2447563a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75538cef284310fb254cadee824a4f44de67872163cbf4f332932a451a0b7db7,PodSandboxId:d43d12644b34ecad64ea2f2e8e8879d632abcb58ab983fb1a867bf05a693a240,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1764408591221150082,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc02c2dd86763f8a7654c214d1aca4ab,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae36582cbc8207da290442472aef7150dd5654da51d2c6bfb156077457c3420e,PodSandboxId:064a34577f14c0558cbe035415c72f0df3d0bd361760c3cc3e7f4548cd8790fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764408591174327569,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd7c40ab743b39365a90b8ce5ed742b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c8f96dff7d0d88bc3f9e905b659365005dcc3c0ab3a617d5aa75138ca581fd,PodSandboxId:0d8ee0e9045738b8a99b81ad7857eef51301dd6ff4e25b
d420f922f63899981a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_CREATED,CreatedAt:1764408591132570485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzkwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0fdc57-ce2f-483b-82f2-006931b3ab39,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77714cab099fbe439b9f36eb17008bc4c718f563945fac16204b748c134957c3,PodSandboxId:acd5b0858b31dfcc849493fbba20ef4b64d22d30eb16b79478f52c8838f1f98a,Metadata:&Cont
ainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1764408591082349561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd77e32b976ddeeaa2821ad1581a49,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49386cd4b239787192e49261e28712a3706738c55e7526c54f9bc6f
46fe925b4,PodSandboxId:fb1043e8abc7adaee6b80b3719fb509dae494263df24a1bfb0f5b112fd52a084,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1764408591039983083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d892aedcec9d261d3ce63d1f2447563a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ef4430e1d2bb8f6efd3aaff4706e7b741d6c4ede2877fa5847dff6b81a716e,PodSandboxId:990af6dc1b865ea31e52bd3b596be9612c1f140ab83c4c2bf9799ccbd542780f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1764408544729560132,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4bmms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64220006-2ede-426c-bd55-8a0c72981851,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"co
ntainerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:178b3ab1cb251b3a9f7c21cd176343ca8ae0a3af11799761ee56e2de3cedd41b,PodSandboxId:2e12473db9fcd13ed241426d8e2e1e024ca83e026fcef11cde19629fc98fed8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1764408531835129253,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893
760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd7c40ab743b39365a90b8ce5ed742b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7555626b5cb53f89c622444b7a65f0d4e5204daa98e629811921ef3bd8259c26,PodSandboxId:607b6ddd8dc665eb03849c32673ba6bfa5f3cf6b26ba656fb823186d5ef39b40,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1764408531785179883,Labels:map[string]string
{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc02c2dd86763f8a7654c214d1aca4ab,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1fe46097-6f87-404d-88ab-bde8b583071d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	4805659e2d235       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   10 seconds ago       Running             coredns                   1                   fab3926d67f6b       coredns-66bc5c9577-4bmms               kube-system
	eb4aed02a347d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   10 seconds ago       Running             kube-proxy                2                   0d8ee0e904573       kube-proxy-rzkwr                       kube-system
	99a25a5d8939a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   14 seconds ago       Running             kube-apiserver            2                   acd5b0858b31d       kube-apiserver-pause-893760            kube-system
	ebedaca83ba82       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   14 seconds ago       Running             kube-controller-manager   2                   fb1043e8abc7a       kube-controller-manager-pause-893760   kube-system
	75538cef28431       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   36 seconds ago       Running             etcd                      1                   d43d12644b34e       etcd-pause-893760                      kube-system
	ae36582cbc820       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   36 seconds ago       Running             kube-scheduler            1                   064a34577f14c       kube-scheduler-pause-893760            kube-system
	d8c8f96dff7d0       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   36 seconds ago       Created             kube-proxy                1                   0d8ee0e904573       kube-proxy-rzkwr                       kube-system
	77714cab099fb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   36 seconds ago       Exited              kube-apiserver            1                   acd5b0858b31d       kube-apiserver-pause-893760            kube-system
	49386cd4b2397       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   36 seconds ago       Exited              kube-controller-manager   1                   fb1043e8abc7a       kube-controller-manager-pause-893760   kube-system
	c6ef4430e1d2b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   990af6dc1b865       coredns-66bc5c9577-4bmms               kube-system
	178b3ab1cb251       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Exited              kube-scheduler            0                   2e12473db9fcd       kube-scheduler-pause-893760            kube-system
	7555626b5cb53       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Exited              etcd                      0                   607b6ddd8dc66       etcd-pause-893760                      kube-system
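
	The listing above can typically be reproduced by hand inside the minikube VM with crictl. A minimal sketch, assuming the pause-893760 profile from these logs is still running (the container ID prefix is copied from the table above):

	    minikube -p pause-893760 ssh      # open a shell in the VM
	    sudo crictl ps -a                 # list all containers, including Created/Exited ones
	    sudo crictl logs 77714cab099fb    # logs of the exited kube-apiserver (attempt 1)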
	
	
	==> coredns [4805659e2d2350aa4b28a3f0a7e9befcdf9d1ce5c46b8a7418eacb37b589daf1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46055 - 13616 "HINFO IN 7613232645828771212.3063199101481583223. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026159738s
	
	
	==> coredns [c6ef4430e1d2bb8f6efd3aaff4706e7b741d6c4ede2877fa5847dff6b81a716e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
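
	The repeated "dial tcp 10.96.0.1:443: i/o timeout" entries above show the first CoreDNS instance losing the API server behind the kubernetes Service VIP while the control plane restarted; the SIGTERM and 5s lameduck lines are its normal shutdown path. A minimal sketch for inspecting the same state from the host, assuming the pause-893760 kubeconfig context that minikube creates:

	    kubectl --context pause-893760 get svc kubernetes -o wide
	    kubectl --context pause-893760 -n kube-system logs coredns-66bc5c9577-4bmms --previous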
	
	
	==> describe nodes <==
	Name:               pause-893760
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-893760
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=pause-893760
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_28_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:28:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-893760
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:30:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:30:16 +0000   Sat, 29 Nov 2025 09:28:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:30:16 +0000   Sat, 29 Nov 2025 09:28:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:30:16 +0000   Sat, 29 Nov 2025 09:28:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:30:16 +0000   Sat, 29 Nov 2025 09:28:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.104
	  Hostname:    pause-893760
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 3c767e78f6e34960a0105830388bba46
	  System UUID:                3c767e78-f6e3-4960-a010-5830388bba46
	  Boot ID:                    2efdb47d-abc8-4960-9699-39eef6f06aa6
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-4bmms                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     84s
	  kube-system                 etcd-pause-893760                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         91s
	  kube-system                 kube-apiserver-pause-893760             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-pause-893760    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-rzkwr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-pause-893760             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 82s                kube-proxy       
	  Normal  Starting                 9s                 kube-proxy       
	  Normal  NodeHasSufficientPID     89s                kubelet          Node pause-893760 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  89s                kubelet          Node pause-893760 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s                kubelet          Node pause-893760 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 89s                kubelet          Starting kubelet.
	  Normal  NodeReady                88s                kubelet          Node pause-893760 status is now: NodeReady
	  Normal  RegisteredNode           85s                node-controller  Node pause-893760 event: Registered Node pause-893760 in Controller
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node pause-893760 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node pause-893760 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x7 over 34s)  kubelet          Node pause-893760 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  34s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                 node-controller  Node pause-893760 event: Registered Node pause-893760 in Controller
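
The doubled Starting / NodeHasSufficient* kubelet events above (89s ago and again 34s ago) are consistent with this test's flow: the cluster is started once, then the kubelet is restarted for the second start. A node snapshot like this one can be reproduced against the profile with:

    kubectl --context pause-893760 describe node pause-893760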
	
	
	==> dmesg <==
	[Nov29 09:28] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001360] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005665] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.193438] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.113787] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.122032] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.112292] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.139725] kauditd_printk_skb: 171 callbacks suppressed
	[Nov29 09:29] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.823808] kauditd_printk_skb: 219 callbacks suppressed
	[ +21.302347] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.174518] kauditd_printk_skb: 304 callbacks suppressed
	[Nov29 09:30] kauditd_printk_skb: 14 callbacks suppressed
	[  +4.007004] kauditd_printk_skb: 22 callbacks suppressed
	
	
	==> etcd [75538cef284310fb254cadee824a4f44de67872163cbf4f332932a451a0b7db7] <==
	{"level":"warn","ts":"2025-11-29T09:30:24.132536Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"397.83603ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6405696258632730950 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-893760\" mod_revision:416 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-893760\" value_size:6749 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-893760\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-29T09:30:24.132594Z","caller":"traceutil/trace.go:172","msg":"trace[585642599] linearizableReadLoop","detail":"{readStateIndex:524; appliedIndex:523; }","duration":"365.15373ms","start":"2025-11-29T09:30:23.767432Z","end":"2025-11-29T09:30:24.132586Z","steps":["trace[585642599] 'read index received'  (duration: 73.203µs)","trace[585642599] 'applied index is now lower than readState.Index'  (duration: 365.079698ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-29T09:30:24.132877Z","caller":"traceutil/trace.go:172","msg":"trace[107119085] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"740.239157ms","start":"2025-11-29T09:30:23.392624Z","end":"2025-11-29T09:30:24.132863Z","steps":["trace[107119085] 'process raft request'  (duration: 341.476448ms)","trace[107119085] 'compare'  (duration: 397.265626ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:30:24.132976Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-29T09:30:23.392604Z","time spent":"740.335843ms","remote":"127.0.0.1:40620","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6820,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-893760\" mod_revision:416 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-893760\" value_size:6749 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-893760\" > >"}
	{"level":"warn","ts":"2025-11-29T09:30:24.133181Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"365.758937ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-893760\" limit:1 ","response":"range_response_count:1 size:6082"}
	{"level":"info","ts":"2025-11-29T09:30:24.133208Z","caller":"traceutil/trace.go:172","msg":"trace[120927304] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-893760; range_end:; response_count:1; response_revision:482; }","duration":"365.786141ms","start":"2025-11-29T09:30:23.767414Z","end":"2025-11-29T09:30:24.133201Z","steps":["trace[120927304] 'agreement among raft nodes before linearized reading'  (duration: 365.692662ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T09:30:24.133227Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-29T09:30:23.767397Z","time spent":"365.824791ms","remote":"127.0.0.1:40620","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":6104,"request content":"key:\"/registry/pods/kube-system/etcd-pause-893760\" limit:1 "}
	{"level":"info","ts":"2025-11-29T09:30:24.426362Z","caller":"traceutil/trace.go:172","msg":"trace[312180602] linearizableReadLoop","detail":"{readStateIndex:524; appliedIndex:524; }","duration":"158.558907ms","start":"2025-11-29T09:30:24.267785Z","end":"2025-11-29T09:30:24.426344Z","steps":["trace[312180602] 'read index received'  (duration: 158.55269ms)","trace[312180602] 'applied index is now lower than readState.Index'  (duration: 5.359µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:30:24.959195Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"340.382926ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-29T09:30:24.959243Z","caller":"traceutil/trace.go:172","msg":"trace[1327476127] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:482; }","duration":"340.441519ms","start":"2025-11-29T09:30:24.618793Z","end":"2025-11-29T09:30:24.959234Z","steps":["trace[1327476127] 'range keys from in-memory index tree'  (duration: 340.355347ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T09:30:24.959408Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"691.625373ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-893760\" limit:1 ","response":"range_response_count:1 size:6082"}
	{"level":"info","ts":"2025-11-29T09:30:24.959443Z","caller":"traceutil/trace.go:172","msg":"trace[1966405968] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-893760; range_end:; response_count:1; response_revision:482; }","duration":"691.678111ms","start":"2025-11-29T09:30:24.267756Z","end":"2025-11-29T09:30:24.959434Z","steps":["trace[1966405968] 'agreement among raft nodes before linearized reading'  (duration: 158.684693ms)","trace[1966405968] 'range keys from in-memory index tree'  (duration: 532.879615ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:30:24.959464Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-29T09:30:24.267735Z","time spent":"691.723669ms","remote":"127.0.0.1:40620","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":6104,"request content":"key:\"/registry/pods/kube-system/etcd-pause-893760\" limit:1 "}
	{"level":"warn","ts":"2025-11-29T09:30:24.959466Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"533.046668ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6405696258632730956 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-893760\" mod_revision:417 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-893760\" value_size:4969 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-893760\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-29T09:30:24.959500Z","caller":"traceutil/trace.go:172","msg":"trace[1101987858] linearizableReadLoop","detail":"{readStateIndex:525; appliedIndex:524; }","duration":"333.665387ms","start":"2025-11-29T09:30:24.625829Z","end":"2025-11-29T09:30:24.959494Z","steps":["trace[1101987858] 'read index received'  (duration: 25.678µs)","trace[1101987858] 'applied index is now lower than readState.Index'  (duration: 333.639267ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:30:24.959664Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"333.834843ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-29T09:30:24.959681Z","caller":"traceutil/trace.go:172","msg":"trace[1440632774] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:483; }","duration":"333.852272ms","start":"2025-11-29T09:30:24.625824Z","end":"2025-11-29T09:30:24.959677Z","steps":["trace[1440632774] 'agreement among raft nodes before linearized reading'  (duration: 333.807987ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T09:30:24.959694Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-29T09:30:24.625810Z","time spent":"333.881665ms","remote":"127.0.0.1:40248","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-11-29T09:30:24.959772Z","caller":"traceutil/trace.go:172","msg":"trace[1550989086] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"813.391945ms","start":"2025-11-29T09:30:24.146374Z","end":"2025-11-29T09:30:24.959766Z","steps":["trace[1550989086] 'process raft request'  (duration: 280.005054ms)","trace[1550989086] 'compare'  (duration: 532.736135ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:30:24.959811Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-29T09:30:24.146356Z","time spent":"813.42817ms","remote":"127.0.0.1:40620","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5031,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-893760\" mod_revision:417 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-893760\" value_size:4969 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-893760\" > >"}
	{"level":"info","ts":"2025-11-29T09:30:25.409127Z","caller":"traceutil/trace.go:172","msg":"trace[702269983] linearizableReadLoop","detail":"{readStateIndex:525; appliedIndex:525; }","duration":"141.637393ms","start":"2025-11-29T09:30:25.267467Z","end":"2025-11-29T09:30:25.409104Z","steps":["trace[702269983] 'read index received'  (duration: 141.625958ms)","trace[702269983] 'applied index is now lower than readState.Index'  (duration: 5.245µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:30:25.417730Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.250756ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-893760\" limit:1 ","response":"range_response_count:1 size:6082"}
	{"level":"info","ts":"2025-11-29T09:30:25.417788Z","caller":"traceutil/trace.go:172","msg":"trace[1064063686] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-893760; range_end:; response_count:1; response_revision:483; }","duration":"150.316242ms","start":"2025-11-29T09:30:25.267462Z","end":"2025-11-29T09:30:25.417778Z","steps":["trace[1064063686] 'agreement among raft nodes before linearized reading'  (duration: 141.880451ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:30:25.417857Z","caller":"traceutil/trace.go:172","msg":"trace[862626125] transaction","detail":"{read_only:false; response_revision:484; number_of_response:1; }","duration":"446.741238ms","start":"2025-11-29T09:30:24.971103Z","end":"2025-11-29T09:30:25.417844Z","steps":["trace[862626125] 'process raft request'  (duration: 438.362822ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T09:30:25.417961Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-29T09:30:24.971091Z","time spent":"446.793691ms","remote":"127.0.0.1:40620","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7227,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-893760\" mod_revision:414 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-893760\" value_size:7165 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-893760\" > >"}
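
The repeated "apply request took too long" warnings above (up to ~813ms against etcd's 100ms expected-duration) typically point at slow disk or CPU contention on the test VM rather than a product regression. A minimal latency spot-check, assuming etcdctl ships in the guest image and that the etcd certs sit under the /var/lib/minikube/certs tree visible elsewhere in these logs:

    minikube -p pause-893760 ssh -- sudo ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint status -w table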
	
	
	==> etcd [7555626b5cb53f89c622444b7a65f0d4e5204daa98e629811921ef3bd8259c26] <==
	{"level":"warn","ts":"2025-11-29T09:28:54.496392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:28:54.507372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:28:54.519577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:28:54.535061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:28:54.546669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:28:54.558000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:28:54.652119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46168","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-29T09:29:41.651238Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-29T09:29:41.652070Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-893760","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.104:2380"],"advertise-client-urls":["https://192.168.83.104:2379"]}
	{"level":"error","ts":"2025-11-29T09:29:41.652374Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-29T09:29:41.736604Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-29T09:29:41.736667Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-29T09:29:41.736688Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"2a0c3b01d1d858e5","current-leader-member-id":"2a0c3b01d1d858e5"}
	{"level":"info","ts":"2025-11-29T09:29:41.736730Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-29T09:29:41.736807Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-29T09:29:41.736798Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-29T09:29:41.736847Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-29T09:29:41.736854Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-29T09:29:41.736895Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.104:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-29T09:29:41.736902Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.104:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-29T09:29:41.736908Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.104:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-29T09:29:41.740909Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.83.104:2380"}
	{"level":"error","ts":"2025-11-29T09:29:41.741021Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.104:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-29T09:29:41.741066Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.83.104:2380"}
	{"level":"info","ts":"2025-11-29T09:29:41.741080Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-893760","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.104:2380"],"advertise-client-urls":["https://192.168.83.104:2379"]}
	
	
	==> kernel <==
	 09:30:28 up 2 min,  0 users,  load average: 1.24, 0.43, 0.16
	Linux pause-893760 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [77714cab099fbe439b9f36eb17008bc4c718f563945fac16204b748c134957c3] <==
	W1129 09:29:52.160388       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:52.160545       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1129 09:29:52.164329       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1129 09:29:52.178482       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1129 09:29:52.189570       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1129 09:29:52.191403       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1129 09:29:52.191649       1 instance.go:239] Using reconciler: lease
	W1129 09:29:52.193082       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:52.193247       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:53.161844       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:53.161859       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:53.194552       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:54.564684       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:54.759118       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:54.975985       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:56.883590       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:57.114998       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:57.723215       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:30:00.394136       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:30:01.765954       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:30:01.962849       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:30:07.055850       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:30:07.224432       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:30:08.724474       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1129 09:30:12.192621       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
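
This apiserver instance raced the etcd restart: every dial to 127.0.0.1:2379 was refused until the storage-factory deadline expired, so the process exited fatally (the F1129 line) and was replaced by the instance in the next section. Once a replacement is serving, its aggregated health, including the etcd check, can be read from the readiness endpoint:

    kubectl --context pause-893760 get --raw '/readyz?verbose'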
	
	
	==> kube-apiserver [99a25a5d8939a18228694eb456392302e9c83463a0275b2753d434deae57f1ee] <==
	I1129 09:30:16.152761       1 policy_source.go:240] refreshing policies
	I1129 09:30:16.159205       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 09:30:16.159574       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1129 09:30:16.180370       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:30:16.186875       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 09:30:16.192984       1 aggregator.go:171] initial CRD sync complete...
	I1129 09:30:16.193004       1 autoregister_controller.go:144] Starting autoregister controller
	I1129 09:30:16.193010       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 09:30:16.193017       1 cache.go:39] Caches are synced for autoregister controller
	I1129 09:30:16.194395       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:30:16.194445       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1129 09:30:16.194500       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1129 09:30:16.194592       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1129 09:30:16.194620       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1129 09:30:16.215874       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1129 09:30:16.226355       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1129 09:30:17.047435       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:30:17.227324       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:30:18.141683       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:30:18.234720       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:30:18.287642       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:30:18.300430       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:30:19.667778       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:30:19.720480       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:30:19.865022       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [49386cd4b239787192e49261e28712a3706738c55e7526c54f9bc6f46fe925b4] <==
	I1129 09:29:52.384815       1 serving.go:386] Generated self-signed cert in-memory
	I1129 09:29:52.609904       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1129 09:29:52.609931       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:29:52.611778       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1129 09:29:52.611916       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1129 09:29:52.612642       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1129 09:29:52.613296       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1129 09:30:13.201460       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.83.104:8443/healthz\": dial tcp 192.168.83.104:8443: connect: connection refused"
	
	
	==> kube-controller-manager [ebedaca83ba826a1dbb5a46ab2511030acc3b00245a2abecd907793732b610d2] <==
	I1129 09:30:19.560237       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 09:30:19.560557       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:30:19.560673       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 09:30:19.560791       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 09:30:19.560865       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-893760"
	I1129 09:30:19.560914       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1129 09:30:19.561642       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1129 09:30:19.561801       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:30:19.562914       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 09:30:19.563002       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1129 09:30:19.563043       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 09:30:19.563086       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 09:30:19.564461       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:30:19.564773       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1129 09:30:19.566487       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:30:19.572670       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 09:30:19.577994       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:30:19.579986       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 09:30:19.589584       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:30:19.589626       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 09:30:19.589637       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 09:30:19.596977       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 09:30:19.596982       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 09:30:19.601315       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1129 09:30:19.869608       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [d8c8f96dff7d0d88bc3f9e905b659365005dcc3c0ab3a617d5aa75138ca581fd] <==
	
	
	==> kube-proxy [eb4aed02a347d4f806f74d29f691b160f1752223360e1f4993891bc19937acc9] <==
	I1129 09:30:17.764211       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:30:17.864664       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:30:17.865369       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.104"]
	E1129 09:30:17.865493       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:30:17.915252       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1129 09:30:17.915376       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1129 09:30:17.915404       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:30:17.936673       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:30:17.937397       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:30:17.937434       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:30:17.946904       1 config.go:200] "Starting service config controller"
	I1129 09:30:17.946965       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:30:17.947000       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:30:17.947007       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:30:17.947023       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:30:17.947029       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:30:17.949252       1 config.go:309] "Starting node config controller"
	I1129 09:30:17.950424       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:30:17.950494       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:30:18.047165       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:30:18.047228       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:30:18.047334       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
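
The "Kube-proxy configuration may be incomplete or incorrect" line above is a warning, not a failure: with nodePortAddresses unset, NodePort services accept connections on all local IPs. If the suggested restriction were wanted, it maps to a single field in the component config, sketched below (how that config would be fed to kube-proxy is outside what this run shows):

    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    nodePortAddresses: ["primary"]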
	
	
	==> kube-scheduler [178b3ab1cb251b3a9f7c21cd176343ca8ae0a3af11799761ee56e2de3cedd41b] <==
	I1129 09:28:55.991245       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1129 09:28:56.000587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1129 09:28:56.001026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:28:56.001083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:28:56.001207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:28:56.001422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:28:56.001435       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:28:56.001529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:28:56.001630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:28:56.001662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:28:56.001856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:28:56.001951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:28:56.002088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 09:28:56.001959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:28:56.002430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:28:56.002460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:28:56.002578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:28:56.002224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 09:28:56.002638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:28:56.002663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1129 09:28:57.591662       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:29:41.655626       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1129 09:29:41.658484       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1129 09:29:41.663585       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1129 09:29:41.663626       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ae36582cbc8207da290442472aef7150dd5654da51d2c6bfb156077457c3420e] <==
	I1129 09:30:14.876137       1 serving.go:386] Generated self-signed cert in-memory
	W1129 09:30:16.114483       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1129 09:30:16.114521       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1129 09:30:16.114530       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1129 09:30:16.114536       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1129 09:30:16.191418       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 09:30:16.191466       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:30:16.207323       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:30:16.207371       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:30:16.207907       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 09:30:16.208022       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 09:30:16.308357       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:30:15 pause-893760 kubelet[3612]: E1129 09:30:15.522516    3612 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-893760\" not found" node="pause-893760"
	Nov 29 09:30:15 pause-893760 kubelet[3612]: E1129 09:30:15.523338    3612 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-893760\" not found" node="pause-893760"
	Nov 29 09:30:15 pause-893760 kubelet[3612]: E1129 09:30:15.523762    3612 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-893760\" not found" node="pause-893760"
	Nov 29 09:30:15 pause-893760 kubelet[3612]: E1129 09:30:15.524201    3612 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-893760\" not found" node="pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: I1129 09:30:16.196394    3612 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: I1129 09:30:16.236072    3612 kubelet_node_status.go:124] "Node was previously registered" node="pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: I1129 09:30:16.236339    3612 kubelet_node_status.go:78] "Successfully registered node" node="pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: I1129 09:30:16.236406    3612 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: I1129 09:30:16.241248    3612 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: E1129 09:30:16.252900    3612 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-893760\" already exists" pod="kube-system/etcd-pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: I1129 09:30:16.252969    3612 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: E1129 09:30:16.267082    3612 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-893760\" already exists" pod="kube-system/kube-apiserver-pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: I1129 09:30:16.267126    3612 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: E1129 09:30:16.283098    3612 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-893760\" already exists" pod="kube-system/kube-controller-manager-pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: I1129 09:30:16.283145    3612 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: E1129 09:30:16.293501    3612 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-893760\" already exists" pod="kube-system/kube-scheduler-pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: I1129 09:30:16.524004    3612 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: E1129 09:30:16.539688    3612 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-893760\" already exists" pod="kube-system/kube-apiserver-pause-893760"
	Nov 29 09:30:17 pause-893760 kubelet[3612]: I1129 09:30:17.179134    3612 apiserver.go:52] "Watching apiserver"
	Nov 29 09:30:17 pause-893760 kubelet[3612]: I1129 09:30:17.195458    3612 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 29 09:30:17 pause-893760 kubelet[3612]: I1129 09:30:17.221029    3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d0fdc57-ce2f-483b-82f2-006931b3ab39-xtables-lock\") pod \"kube-proxy-rzkwr\" (UID: \"8d0fdc57-ce2f-483b-82f2-006931b3ab39\") " pod="kube-system/kube-proxy-rzkwr"
	Nov 29 09:30:17 pause-893760 kubelet[3612]: I1129 09:30:17.221092    3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d0fdc57-ce2f-483b-82f2-006931b3ab39-lib-modules\") pod \"kube-proxy-rzkwr\" (UID: \"8d0fdc57-ce2f-483b-82f2-006931b3ab39\") " pod="kube-system/kube-proxy-rzkwr"
	Nov 29 09:30:17 pause-893760 kubelet[3612]: I1129 09:30:17.484635    3612 scope.go:117] "RemoveContainer" containerID="d8c8f96dff7d0d88bc3f9e905b659365005dcc3c0ab3a617d5aa75138ca581fd"
	Nov 29 09:30:23 pause-893760 kubelet[3612]: E1129 09:30:23.382787    3612 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764408623382011531  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 29 09:30:23 pause-893760 kubelet[3612]: E1129 09:30:23.382930    3612 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764408623382011531  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
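
The two eviction-manager errors above stem from the kubelet not finding the image-filesystem stats it expects in cri-o's ImageFsInfo response; they recur until the stats resolve and do not by themselves fail the test. The raw values the kubelet is rejecting can be read straight from the runtime, assuming crictl is present in the guest image:

    minikube -p pause-893760 ssh -- sudo crictl imagefsinfo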
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-893760 -n pause-893760
helpers_test.go:269: (dbg) Run:  kubectl --context pause-893760 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-893760 -n pause-893760
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-893760 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-893760 logs -n 25: (1.430364821s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p guest-872325 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                 │ guest-872325              │ jenkins │ v1.37.0 │ 29 Nov 25 09:26 UTC │ 29 Nov 25 09:26 UTC │
	│ ssh     │ -p NoKubernetes-371904 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-371904       │ jenkins │ v1.37.0 │ 29 Nov 25 09:26 UTC │                     │
	│ stop    │ -p NoKubernetes-371904                                                                                                                                                                                                  │ NoKubernetes-371904       │ jenkins │ v1.37.0 │ 29 Nov 25 09:26 UTC │ 29 Nov 25 09:26 UTC │
	│ start   │ -p NoKubernetes-371904 --driver=kvm2  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-371904       │ jenkins │ v1.37.0 │ 29 Nov 25 09:26 UTC │ 29 Nov 25 09:26 UTC │
	│ delete  │ -p kubernetes-upgrade-553896                                                                                                                                                                                            │ kubernetes-upgrade-553896 │ jenkins │ v1.37.0 │ 29 Nov 25 09:26 UTC │ 29 Nov 25 09:26 UTC │
	│ start   │ -p force-systemd-env-743631 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                                │ force-systemd-env-743631  │ jenkins │ v1.37.0 │ 29 Nov 25 09:26 UTC │ 29 Nov 25 09:27 UTC │
	│ start   │ -p force-systemd-flag-325714 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                               │ force-systemd-flag-325714 │ jenkins │ v1.37.0 │ 29 Nov 25 09:26 UTC │ 29 Nov 25 09:27 UTC │
	│ ssh     │ -p NoKubernetes-371904 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-371904       │ jenkins │ v1.37.0 │ 29 Nov 25 09:26 UTC │                     │
	│ delete  │ -p NoKubernetes-371904                                                                                                                                                                                                  │ NoKubernetes-371904       │ jenkins │ v1.37.0 │ 29 Nov 25 09:26 UTC │ 29 Nov 25 09:26 UTC │
	│ start   │ -p cert-expiration-369885 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                    │ cert-expiration-369885    │ jenkins │ v1.37.0 │ 29 Nov 25 09:26 UTC │ 29 Nov 25 09:28 UTC │
	│ delete  │ -p force-systemd-env-743631                                                                                                                                                                                             │ force-systemd-env-743631  │ jenkins │ v1.37.0 │ 29 Nov 25 09:27 UTC │ 29 Nov 25 09:27 UTC │
	│ start   │ -p cert-options-648964 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-648964       │ jenkins │ v1.37.0 │ 29 Nov 25 09:27 UTC │ 29 Nov 25 09:28 UTC │
	│ ssh     │ force-systemd-flag-325714 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                    │ force-systemd-flag-325714 │ jenkins │ v1.37.0 │ 29 Nov 25 09:27 UTC │ 29 Nov 25 09:27 UTC │
	│ delete  │ -p force-systemd-flag-325714                                                                                                                                                                                            │ force-systemd-flag-325714 │ jenkins │ v1.37.0 │ 29 Nov 25 09:27 UTC │ 29 Nov 25 09:27 UTC │
	│ start   │ -p pause-893760 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                 │ pause-893760              │ jenkins │ v1.37.0 │ 29 Nov 25 09:27 UTC │ 29 Nov 25 09:29 UTC │
	│ ssh     │ cert-options-648964 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                             │ cert-options-648964       │ jenkins │ v1.37.0 │ 29 Nov 25 09:28 UTC │ 29 Nov 25 09:28 UTC │
	│ ssh     │ -p cert-options-648964 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                           │ cert-options-648964       │ jenkins │ v1.37.0 │ 29 Nov 25 09:28 UTC │ 29 Nov 25 09:28 UTC │
	│ delete  │ -p cert-options-648964                                                                                                                                                                                                  │ cert-options-648964       │ jenkins │ v1.37.0 │ 29 Nov 25 09:28 UTC │ 29 Nov 25 09:28 UTC │
	│ start   │ -p stopped-upgrade-044628 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-044628    │ jenkins │ v1.35.0 │ 29 Nov 25 09:28 UTC │ 29 Nov 25 09:29 UTC │
	│ stop    │ stopped-upgrade-044628 stop                                                                                                                                                                                             │ stopped-upgrade-044628    │ jenkins │ v1.35.0 │ 29 Nov 25 09:29 UTC │ 29 Nov 25 09:29 UTC │
	│ start   │ -p stopped-upgrade-044628 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                  │ stopped-upgrade-044628    │ jenkins │ v1.37.0 │ 29 Nov 25 09:29 UTC │ 29 Nov 25 09:29 UTC │
	│ start   │ -p pause-893760 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-893760              │ jenkins │ v1.37.0 │ 29 Nov 25 09:29 UTC │ 29 Nov 25 09:30 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-044628 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ stopped-upgrade-044628    │ jenkins │ v1.37.0 │ 29 Nov 25 09:29 UTC │                     │
	│ delete  │ -p stopped-upgrade-044628                                                                                                                                                                                               │ stopped-upgrade-044628    │ jenkins │ v1.37.0 │ 29 Nov 25 09:29 UTC │ 29 Nov 25 09:29 UTC │
	│ start   │ -p auto-473168 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                                                                                   │ auto-473168               │ jenkins │ v1.37.0 │ 29 Nov 25 09:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 09:29:59
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 09:29:59.022401   40531 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:29:59.022676   40531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:29:59.022685   40531 out.go:374] Setting ErrFile to fd 2...
	I1129 09:29:59.022689   40531 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:29:59.022912   40531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
	I1129 09:29:59.023389   40531 out.go:368] Setting JSON to false
	I1129 09:29:59.024282   40531 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4343,"bootTime":1764404256,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:29:59.024341   40531 start.go:143] virtualization: kvm guest
	I1129 09:29:59.026786   40531 out.go:179] * [auto-473168] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:29:59.028641   40531 notify.go:221] Checking for updates...
	I1129 09:29:59.028680   40531 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:29:59.030442   40531 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:29:59.031919   40531 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5651/kubeconfig
	I1129 09:29:59.033301   40531 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5651/.minikube
	I1129 09:29:59.034543   40531 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:29:59.035951   40531 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:29:59.037662   40531 config.go:182] Loaded profile config "cert-expiration-369885": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:29:59.037738   40531 config.go:182] Loaded profile config "guest-872325": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1129 09:29:59.037863   40531 config.go:182] Loaded profile config "pause-893760": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:29:59.037948   40531 config.go:182] Loaded profile config "running-upgrade-501515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1129 09:29:59.038039   40531 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:29:59.077526   40531 out.go:179] * Using the kvm2 driver based on user configuration
	I1129 09:29:59.078926   40531 start.go:309] selected driver: kvm2
	I1129 09:29:59.078940   40531 start.go:927] validating driver "kvm2" against <nil>
	I1129 09:29:59.078950   40531 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:29:59.079625   40531 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 09:29:59.079869   40531 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:29:59.079897   40531 cni.go:84] Creating CNI manager for ""
	I1129 09:29:59.079937   40531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1129 09:29:59.079945   40531 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1129 09:29:59.079982   40531 start.go:353] cluster config:
	{Name:auto-473168 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-473168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:29:59.080074   40531 iso.go:125] acquiring lock: {Name:mk0184b92a126aea44cd27d4836c247b817b0333 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 09:29:59.081496   40531 out.go:179] * Starting "auto-473168" primary control-plane node in "auto-473168" cluster
	I1129 09:29:59.082592   40531 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:29:59.082619   40531 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5651/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 09:29:59.082625   40531 cache.go:65] Caching tarball of preloaded images
	I1129 09:29:59.082708   40531 preload.go:238] Found /home/jenkins/minikube-integration/22000-5651/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1129 09:29:59.082719   40531 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1129 09:29:59.082812   40531 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/config.json ...
	I1129 09:29:59.082851   40531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/config.json: {Name:mkb2f106e8d4acad317b06f5df886bb1f9b2bb67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:29:59.082979   40531 start.go:360] acquireMachinesLock for auto-473168: {Name:mke0bd376b87e419ebada00803bbcbb9230316d5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1129 09:29:59.083010   40531 start.go:364] duration metric: took 18.699µs to acquireMachinesLock for "auto-473168"
	I1129 09:29:59.083032   40531 start.go:93] Provisioning new machine with config: &{Name:auto-473168 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-473168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:29:59.083099   40531 start.go:125] createHost starting for "" (driver="kvm2")
	I1129 09:29:57.082123   35232 logs.go:123] Gathering logs for etcd [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5] ...
	I1129 09:29:57.082160   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:29:57.125947   35232 logs.go:123] Gathering logs for kube-proxy [3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df] ...
	I1129 09:29:57.125984   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:29:57.168068   35232 logs.go:123] Gathering logs for container status ...
	I1129 09:29:57.168101   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:29:57.222774   35232 logs.go:123] Gathering logs for kubelet ...
	I1129 09:29:57.222871   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:29:57.331532   35232 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:29:57.331580   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:29:57.403700   35232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:29:57.403728   35232 logs.go:123] Gathering logs for coredns [c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a] ...
	I1129 09:29:57.403748   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:29:57.453928   35232 logs.go:123] Gathering logs for kube-proxy [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9] ...
	I1129 09:29:57.453967   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:29:57.525162   35232 logs.go:123] Gathering logs for kube-scheduler [a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126] ...
	I1129 09:29:57.525204   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:29:57.568113   35232 logs.go:123] Gathering logs for storage-provisioner [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666] ...
	I1129 09:29:57.568149   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:29:57.611335   35232 logs.go:123] Gathering logs for kube-controller-manager [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6] ...
	I1129 09:29:57.611369   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:29:57.656563   35232 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:29:57.656594   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:30:00.550760   35232 api_server.go:253] Checking apiserver healthz at https://192.168.72.99:8443/healthz ...
	I1129 09:30:00.551607   35232 api_server.go:269] stopped: https://192.168.72.99:8443/healthz: Get "https://192.168.72.99:8443/healthz": dial tcp 192.168.72.99:8443: connect: connection refused
	I1129 09:30:00.551722   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:30:00.551805   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:30:00.601757   35232 cri.go:89] found id: "d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d"
	I1129 09:30:00.601784   35232 cri.go:89] found id: ""
	I1129 09:30:00.601796   35232 logs.go:282] 1 containers: [d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d]
	I1129 09:30:00.601883   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:00.606491   35232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:30:00.606590   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:30:00.645604   35232 cri.go:89] found id: "2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:30:00.645633   35232 cri.go:89] found id: ""
	I1129 09:30:00.645644   35232 logs.go:282] 1 containers: [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5]
	I1129 09:30:00.645695   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:00.650938   35232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:30:00.651040   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:30:00.698953   35232 cri.go:89] found id: "5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b"
	I1129 09:30:00.698975   35232 cri.go:89] found id: "c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:30:00.698979   35232 cri.go:89] found id: ""
	I1129 09:30:00.698989   35232 logs.go:282] 2 containers: [5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a]
	I1129 09:30:00.699058   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:00.704079   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:00.709207   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:30:00.709312   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:30:00.747252   35232 cri.go:89] found id: "904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b"
	I1129 09:30:00.747280   35232 cri.go:89] found id: "a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:30:00.747286   35232 cri.go:89] found id: ""
	I1129 09:30:00.747296   35232 logs.go:282] 2 containers: [904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126]
	I1129 09:30:00.747361   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:00.752150   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:00.756718   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:30:00.756793   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:30:00.799717   35232 cri.go:89] found id: "3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:30:00.799747   35232 cri.go:89] found id: "3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:30:00.799756   35232 cri.go:89] found id: ""
	I1129 09:30:00.799766   35232 logs.go:282] 2 containers: [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df]
	I1129 09:30:00.799867   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:00.804621   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:00.808682   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:30:00.808764   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:30:00.859492   35232 cri.go:89] found id: "b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:30:00.859529   35232 cri.go:89] found id: ""
	I1129 09:30:00.859539   35232 logs.go:282] 1 containers: [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6]
	I1129 09:30:00.859598   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:00.865176   35232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:30:00.865254   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:30:00.904032   35232 cri.go:89] found id: ""
	I1129 09:30:00.904064   35232 logs.go:282] 0 containers: []
	W1129 09:30:00.904071   35232 logs.go:284] No container was found matching "kindnet"
	I1129 09:30:00.904077   35232 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:30:00.904130   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:30:00.941697   35232 cri.go:89] found id: "60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:30:00.941724   35232 cri.go:89] found id: ""
	I1129 09:30:00.941736   35232 logs.go:282] 1 containers: [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666]
	I1129 09:30:00.941796   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:00.947067   35232 logs.go:123] Gathering logs for dmesg ...
	I1129 09:30:00.947103   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:30:00.961976   35232 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:30:00.962007   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:30:01.037057   35232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:30:01.037102   35232 logs.go:123] Gathering logs for coredns [c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a] ...
	I1129 09:30:01.037120   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:30:01.079388   35232 logs.go:123] Gathering logs for kube-scheduler [904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b] ...
	I1129 09:30:01.079417   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b"
	I1129 09:30:01.174863   35232 logs.go:123] Gathering logs for kube-scheduler [a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126] ...
	I1129 09:30:01.174897   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:30:01.218909   35232 logs.go:123] Gathering logs for kube-proxy [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9] ...
	I1129 09:30:01.218941   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:30:01.270420   35232 logs.go:123] Gathering logs for kube-proxy [3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df] ...
	I1129 09:30:01.270467   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:30:01.310765   35232 logs.go:123] Gathering logs for kube-controller-manager [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6] ...
	I1129 09:30:01.310814   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:30:01.353703   35232 logs.go:123] Gathering logs for kubelet ...
	I1129 09:30:01.353734   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:30:01.456399   35232 logs.go:123] Gathering logs for kube-apiserver [d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d] ...
	I1129 09:30:01.456438   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d"
	I1129 09:30:01.500516   35232 logs.go:123] Gathering logs for etcd [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5] ...
	I1129 09:30:01.500554   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:30:01.547853   35232 logs.go:123] Gathering logs for coredns [5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b] ...
	I1129 09:30:01.547896   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b"
	I1129 09:30:01.601869   35232 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:30:01.601906   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:30:01.957212   35232 logs.go:123] Gathering logs for storage-provisioner [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666] ...
	I1129 09:30:01.957254   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:30:02.001158   35232 logs.go:123] Gathering logs for container status ...
	I1129 09:30:02.001187   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
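The ssh_runner lines around here all follow the same shape: ask crictl for the IDs of containers matching a component name, then tail each container's logs. A standalone sketch of that pattern in Go (the hard-coded component name and use of sudo are illustrative, not minikube's actual code):

	// editor's sketch: the discovery-then-gather pattern the surrounding
	// cri.go/logs.go lines follow, reduced to a standalone program.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List all containers (running or exited) matching a name, printing
		// only their IDs -- mirrors "sudo crictl ps -a --quiet --name=...".
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
		if err != nil {
			fmt.Println("crictl ps failed:", err)
			return
		}
		for _, id := range strings.Fields(string(out)) {
			// Tail the last 400 lines of each matching container's logs,
			// as in the "crictl logs --tail 400 <id>" commands above.
			logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Printf("logs for %s failed: %v\n", id, err)
				continue
			}
			fmt.Printf("== %s ==\n%s", id, logs)
		}
	}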
	I1129 09:29:58.347303   40298 api_server.go:269] stopped: https://192.168.83.104:8443/healthz: Get "https://192.168.83.104:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1129 09:29:58.347371   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:29:59.084611   40531 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1129 09:29:59.084764   40531 start.go:159] libmachine.API.Create for "auto-473168" (driver="kvm2")
	I1129 09:29:59.084801   40531 client.go:173] LocalClient.Create starting
	I1129 09:29:59.084898   40531 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem
	I1129 09:29:59.084939   40531 main.go:143] libmachine: Decoding PEM data...
	I1129 09:29:59.084962   40531 main.go:143] libmachine: Parsing certificate...
	I1129 09:29:59.085040   40531 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22000-5651/.minikube/certs/cert.pem
	I1129 09:29:59.085071   40531 main.go:143] libmachine: Decoding PEM data...
	I1129 09:29:59.085089   40531 main.go:143] libmachine: Parsing certificate...
	I1129 09:29:59.085426   40531 main.go:143] libmachine: creating domain...
	I1129 09:29:59.085444   40531 main.go:143] libmachine: creating network...
	I1129 09:29:59.086914   40531 main.go:143] libmachine: found existing default network
	I1129 09:29:59.087164   40531 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1129 09:29:59.088154   40531 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:6c:28:57} reservation:<nil>}
	I1129 09:29:59.089063   40531 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bfc930}
	I1129 09:29:59.089152   40531 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-auto-473168</name>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1129 09:29:59.095810   40531 main.go:143] libmachine: creating private network mk-auto-473168 192.168.50.0/24...
	I1129 09:29:59.178459   40531 main.go:143] libmachine: private network mk-auto-473168 192.168.50.0/24 created
	I1129 09:29:59.178793   40531 main.go:143] libmachine: <network>
	  <name>mk-auto-473168</name>
	  <uuid>cebc8e5d-2842-4160-862e-c2a1a73ad036</uuid>
	  <bridge name='virbr2' stp='on' delay='0'/>
	  <mac address='52:54:00:5c:7e:b3'/>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>
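The network.go lines above first skip 192.168.39.0/24 because it is already taken, then settle on 192.168.50.0/24 and create the libvirt network from the XML shown. A toy version of that first-free-subnet scan (the candidate list and taken set are hard-coded for illustration; minikube actually inspects the host's interfaces):

	// editor's sketch: picking the first free /24 from candidate private
	// subnets, in the spirit of the network.go lines above.
	package main

	import "fmt"

	func main() {
		taken := map[string]bool{"192.168.39.0/24": true}
		candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"}
		for _, cidr := range candidates {
			if taken[cidr] {
				fmt.Println("skipping subnet that is taken:", cidr)
				continue
			}
			fmt.Println("using free private subnet:", cidr)
			return
		}
		fmt.Println("no free subnet found")
	}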
	
	I1129 09:29:59.178846   40531 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168 ...
	I1129 09:29:59.178873   40531 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22000-5651/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1129 09:29:59.178906   40531 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22000-5651/.minikube
	I1129 09:29:59.178995   40531 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22000-5651/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22000-5651/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso...
	I1129 09:29:59.427038   40531 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168/id_rsa...
	I1129 09:29:59.489929   40531 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168/auto-473168.rawdisk...
	I1129 09:29:59.489973   40531 main.go:143] libmachine: Writing magic tar header
	I1129 09:29:59.490018   40531 main.go:143] libmachine: Writing SSH key tar header
	I1129 09:29:59.490096   40531 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168 ...
	I1129 09:29:59.490153   40531 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168
	I1129 09:29:59.490189   40531 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168 (perms=drwx------)
	I1129 09:29:59.490205   40531 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22000-5651/.minikube/machines
	I1129 09:29:59.490220   40531 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22000-5651/.minikube/machines (perms=drwxr-xr-x)
	I1129 09:29:59.490232   40531 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22000-5651/.minikube
	I1129 09:29:59.490240   40531 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22000-5651/.minikube (perms=drwxr-xr-x)
	I1129 09:29:59.490249   40531 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22000-5651
	I1129 09:29:59.490257   40531 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22000-5651 (perms=drwxrwxr-x)
	I1129 09:29:59.490266   40531 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1129 09:29:59.490274   40531 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1129 09:29:59.490285   40531 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1129 09:29:59.490292   40531 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1129 09:29:59.490300   40531 main.go:143] libmachine: checking permissions on dir: /home
	I1129 09:29:59.490306   40531 main.go:143] libmachine: skipping /home - not owner
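The permission walk above climbs from the machine directory toward /, setting the executable (search) bit on each parent so the store path stays traversable, and skips directories it does not own. A compact sketch of the same idea (the path and error handling are illustrative):

	// editor's sketch: making every parent directory of a machine store
	// traversable, in the spirit of the permission checks above.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		dir := "/home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168"
		for dir != "/" {
			info, err := os.Stat(dir)
			if err != nil {
				fmt.Println("stat failed:", err)
				return
			}
			// Add the search (execute) bits so the directory can be traversed.
			if err := os.Chmod(dir, info.Mode()|0o111); err != nil {
				fmt.Printf("skipping %s: %v\n", dir, err) // e.g. not the owner
			}
			dir = filepath.Dir(dir)
		}
	}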
	I1129 09:29:59.490313   40531 main.go:143] libmachine: defining domain...
	I1129 09:29:59.491769   40531 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>auto-473168</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168/auto-473168.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-auto-473168'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
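Domain definitions like the one above are rendered from a template before being handed to libvirt. As a rough illustration of the technique only (this is not minikube's real template, and the field names are invented), with Go's text/template:

	// editor's sketch: rendering a cut-down libvirt domain definition from a
	// template. Struct fields and the trimmed XML are illustrative only.
	package main

	import (
		"os"
		"text/template"
	)

	const domainTmpl = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMiB}}</memory>
	  <vcpu>{{.CPUs}}</vcpu>
	  <devices>
	    <disk type='file' device='disk'>
	      <source file='{{.DiskPath}}'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='{{.Network}}'/>
	      <model type='virtio'/>
	    </interface>
	  </devices>
	</domain>
	`

	func main() {
		params := struct {
			Name, DiskPath, Network string
			MemoryMiB, CPUs         int
		}{"auto-473168", "/path/to/auto-473168.rawdisk", "mk-auto-473168", 3072, 2}

		t := template.Must(template.New("domain").Parse(domainTmpl))
		if err := t.Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}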
	
	I1129 09:29:59.497398   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a5:a4:9a in network default
	I1129 09:29:59.497973   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:29:59.497993   40531 main.go:143] libmachine: starting domain...
	I1129 09:29:59.497997   40531 main.go:143] libmachine: ensuring networks are active...
	I1129 09:29:59.498853   40531 main.go:143] libmachine: Ensuring network default is active
	I1129 09:29:59.499243   40531 main.go:143] libmachine: Ensuring network mk-auto-473168 is active
	I1129 09:29:59.499908   40531 main.go:143] libmachine: getting domain XML...
	I1129 09:29:59.500931   40531 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>auto-473168</name>
	  <uuid>12b9578d-d9c7-4043-80ea-3410fd280c4f</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168/auto-473168.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:a2:da:2e'/>
	      <source network='mk-auto-473168'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:a5:a4:9a'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1129 09:30:00.866038   40531 main.go:143] libmachine: waiting for domain to start...
	I1129 09:30:00.867616   40531 main.go:143] libmachine: domain is now running
	I1129 09:30:00.867640   40531 main.go:143] libmachine: waiting for IP...
	I1129 09:30:00.868710   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:00.869444   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:00.869459   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:00.869855   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:00.869895   40531 retry.go:31] will retry after 258.067011ms: waiting for domain to come up
	I1129 09:30:01.129631   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:01.130538   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:01.130560   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:01.130984   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:01.131025   40531 retry.go:31] will retry after 285.642559ms: waiting for domain to come up
	I1129 09:30:01.418751   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:01.419384   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:01.419397   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:01.419811   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:01.419870   40531 retry.go:31] will retry after 482.162859ms: waiting for domain to come up
	I1129 09:30:01.903262   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:01.904058   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:01.904072   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:01.904437   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:01.904476   40531 retry.go:31] will retry after 590.074753ms: waiting for domain to come up
	I1129 09:30:02.496529   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:02.497291   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:02.497316   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:02.497695   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:02.497738   40531 retry.go:31] will retry after 498.758845ms: waiting for domain to come up
	I1129 09:30:02.998688   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:02.999492   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:02.999522   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:02.999906   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:02.999940   40531 retry.go:31] will retry after 892.428522ms: waiting for domain to come up
	I1129 09:30:03.894011   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:03.894618   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:03.894635   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:03.895032   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:03.895073   40531 retry.go:31] will retry after 1.071001925s: waiting for domain to come up
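The retry.go lines above poll for the new domain's IP with an irregular, growing delay between attempts (258ms, 285ms, 482ms, 590ms, ...). A self-contained sketch of that jittered-backoff loop (the probe function and the backoff constants are made up for illustration):

	// editor's sketch: retry with a growing, jittered delay, in the spirit of
	// the retry.go lines above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// probe stands in for "does the domain have an IP yet?"; it fails a few times.
	func probe(attempt int) error {
		if attempt < 5 {
			return errors.New("waiting for domain to come up")
		}
		return nil
	}

	func main() {
		delay := 200 * time.Millisecond
		for attempt := 0; ; attempt++ {
			if err := probe(attempt); err == nil {
				fmt.Println("domain is up after", attempt, "attempts")
				return
			} else {
				// Jitter the delay and let it grow, mirroring the irregular,
				// increasing waits in the log above.
				jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
				fmt.Printf("will retry after %v: %v\n", jittered, err)
				time.Sleep(jittered)
				delay = delay * 3 / 2
			}
		}
	}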
	I1129 09:30:04.553102   35232 api_server.go:253] Checking apiserver healthz at https://192.168.72.99:8443/healthz ...
	I1129 09:30:04.553699   35232 api_server.go:269] stopped: https://192.168.72.99:8443/healthz: Get "https://192.168.72.99:8443/healthz": dial tcp 192.168.72.99:8443: connect: connection refused
	I1129 09:30:04.553757   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:30:04.553857   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:30:04.593327   35232 cri.go:89] found id: "d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d"
	I1129 09:30:04.593353   35232 cri.go:89] found id: ""
	I1129 09:30:04.593362   35232 logs.go:282] 1 containers: [d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d]
	I1129 09:30:04.593427   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:04.597597   35232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:30:04.597670   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:30:04.634681   35232 cri.go:89] found id: "2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:30:04.634706   35232 cri.go:89] found id: ""
	I1129 09:30:04.634716   35232 logs.go:282] 1 containers: [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5]
	I1129 09:30:04.634794   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:04.640571   35232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:30:04.640664   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:30:04.681526   35232 cri.go:89] found id: "5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b"
	I1129 09:30:04.681553   35232 cri.go:89] found id: "c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:30:04.681560   35232 cri.go:89] found id: ""
	I1129 09:30:04.681570   35232 logs.go:282] 2 containers: [5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a]
	I1129 09:30:04.681634   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:04.686228   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:04.690729   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:30:04.690823   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:30:04.737669   35232 cri.go:89] found id: "904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b"
	I1129 09:30:04.737692   35232 cri.go:89] found id: "a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:30:04.737698   35232 cri.go:89] found id: ""
	I1129 09:30:04.737707   35232 logs.go:282] 2 containers: [904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126]
	I1129 09:30:04.737773   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:04.743040   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:04.748184   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:30:04.748252   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:30:04.788461   35232 cri.go:89] found id: "3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:30:04.788488   35232 cri.go:89] found id: "3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:30:04.788494   35232 cri.go:89] found id: ""
	I1129 09:30:04.788506   35232 logs.go:282] 2 containers: [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df]
	I1129 09:30:04.788600   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:04.795016   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:04.800315   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:30:04.800396   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:30:04.838636   35232 cri.go:89] found id: "b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:30:04.838667   35232 cri.go:89] found id: ""
	I1129 09:30:04.838678   35232 logs.go:282] 1 containers: [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6]
	I1129 09:30:04.838752   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:04.843352   35232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:30:04.843429   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:30:04.881659   35232 cri.go:89] found id: ""
	I1129 09:30:04.881700   35232 logs.go:282] 0 containers: []
	W1129 09:30:04.881712   35232 logs.go:284] No container was found matching "kindnet"
	I1129 09:30:04.881721   35232 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:30:04.881782   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:30:04.918462   35232 cri.go:89] found id: "60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:30:04.918489   35232 cri.go:89] found id: ""
	I1129 09:30:04.918500   35232 logs.go:282] 1 containers: [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666]
	I1129 09:30:04.918564   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:04.923029   35232 logs.go:123] Gathering logs for etcd [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5] ...
	I1129 09:30:04.923057   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:30:04.969643   35232 logs.go:123] Gathering logs for coredns [c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a] ...
	I1129 09:30:04.969671   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:30:05.011586   35232 logs.go:123] Gathering logs for kube-proxy [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9] ...
	I1129 09:30:05.011629   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:30:05.075558   35232 logs.go:123] Gathering logs for storage-provisioner [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666] ...
	I1129 09:30:05.075596   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:30:05.116843   35232 logs.go:123] Gathering logs for container status ...
	I1129 09:30:05.116874   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:30:05.163867   35232 logs.go:123] Gathering logs for kubelet ...
	I1129 09:30:05.163899   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:30:05.261927   35232 logs.go:123] Gathering logs for kube-scheduler [904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b] ...
	I1129 09:30:05.261975   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b"
	I1129 09:30:05.358603   35232 logs.go:123] Gathering logs for kube-proxy [3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df] ...
	I1129 09:30:05.358646   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:30:05.400236   35232 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:30:05.400269   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:30:05.740978   35232 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:30:05.741014   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:30:05.811481   35232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
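[Annotation] The "describe nodes" step fails because nothing is listening on the apiserver's localhost:8443 endpoint inside the VM yet, so kubectl's connection is refused outright. A raw TCP probe confirms that condition and distinguishes it from TLS or auth failures; a sketch (the address and timeout are illustrative assumptions):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // "connection refused" from kubectl means there is no listener on the
        // port at all, not a certificate or credential problem.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }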
	I1129 09:30:05.811510   35232 logs.go:123] Gathering logs for kube-scheduler [a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126] ...
	I1129 09:30:05.811532   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:30:05.849963   35232 logs.go:123] Gathering logs for kube-controller-manager [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6] ...
	I1129 09:30:05.849996   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:30:05.897347   35232 logs.go:123] Gathering logs for dmesg ...
	I1129 09:30:05.897384   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:30:05.912214   35232 logs.go:123] Gathering logs for kube-apiserver [d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d] ...
	I1129 09:30:05.912249   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d"
	I1129 09:30:05.956575   35232 logs.go:123] Gathering logs for coredns [5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b] ...
	I1129 09:30:05.956607   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b"
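[Annotation] Each gathering pass issues the same fixed set of commands: `crictl logs --tail 400 <id>` per discovered container, plus journalctl for kubelet and CRI-O, a filtered dmesg, and a container-status sweep. A condensed sketch of that collector loop (command strings copied from the log; local execution instead of minikube's SSH runner is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        collectors := map[string]string{
            "kubelet": "sudo journalctl -u kubelet -n 400",
            "CRI-O":   "sudo journalctl -u crio -n 400",
            "dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            // plus one entry per discovered container ID, e.g.:
            "etcd": "sudo /usr/bin/crictl logs --tail 400 2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5",
        }
        for name, cmd := range collectors {
            fmt.Printf("Gathering logs for %s ...\n", name)
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                fmt.Printf("  %s failed: %v\n", name, err)
            }
            _ = out // minikube folds this output into the post-mortem report
        }
    }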
	I1129 09:30:03.348490   40298 api_server.go:269] stopped: https://192.168.83.104:8443/healthz: Get "https://192.168.83.104:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1129 09:30:03.348535   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:04.968236   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:04.968983   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:04.968999   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:04.969360   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:04.969394   40531 retry.go:31] will retry after 1.18871546s: waiting for domain to come up
	I1129 09:30:06.159423   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:06.160184   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:06.160205   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:06.160661   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:06.160717   40531 retry.go:31] will retry after 1.576835139s: waiting for domain to come up
	I1129 09:30:07.739409   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:07.740176   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:07.740196   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:07.740642   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:07.740678   40531 retry.go:31] will retry after 2.234982579s: waiting for domain to come up
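[Annotation] The auto-473168 machine has no DHCP lease yet, so libmachine alternates between the lease table (source=lease) and ARP (source=arp), then sleeps a growing, jittered delay before retrying (1.19s, 1.58s, 2.23s, ... in the log). The exact policy in retry.go is not shown; a minimal jittered-backoff sketch consistent with those delays, where the helper name and constants are assumptions:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn until it succeeds or attempts run out, sleeping a jittered,
    // growing delay between tries, like the "will retry after ..." log lines.
    func retry(attempts int, base time.Duration, fn func() error) error {
        for i := 0; i < attempts; i++ {
            if err := fn(); err == nil {
                return nil
            }
            d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: waiting for domain to come up\n", d)
            time.Sleep(d)
        }
        return errors.New("domain never reported an IP address")
    }

    func main() {
        _ = retry(10, time.Second, func() error {
            // stand-in for checking the DHCP lease table, then ARP
            return errors.New("no network interface addresses found")
        })
    }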
	I1129 09:30:08.515350   35232 api_server.go:253] Checking apiserver healthz at https://192.168.72.99:8443/healthz ...
	I1129 09:30:08.516205   35232 api_server.go:269] stopped: https://192.168.72.99:8443/healthz: Get "https://192.168.72.99:8443/healthz": dial tcp 192.168.72.99:8443: connect: connection refused
	I1129 09:30:08.516261   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:30:08.516315   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:30:08.562042   35232 cri.go:89] found id: "d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d"
	I1129 09:30:08.562069   35232 cri.go:89] found id: ""
	I1129 09:30:08.562080   35232 logs.go:282] 1 containers: [d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d]
	I1129 09:30:08.562146   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:08.568910   35232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:30:08.569004   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:30:08.614625   35232 cri.go:89] found id: "2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:30:08.614661   35232 cri.go:89] found id: ""
	I1129 09:30:08.614673   35232 logs.go:282] 1 containers: [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5]
	I1129 09:30:08.614765   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:08.621168   35232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:30:08.621260   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:30:08.665974   35232 cri.go:89] found id: "5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b"
	I1129 09:30:08.666010   35232 cri.go:89] found id: "c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:30:08.666016   35232 cri.go:89] found id: ""
	I1129 09:30:08.666024   35232 logs.go:282] 2 containers: [5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a]
	I1129 09:30:08.666087   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:08.672471   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:08.679006   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:30:08.679097   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:30:08.722306   35232 cri.go:89] found id: "904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b"
	I1129 09:30:08.722336   35232 cri.go:89] found id: "a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:30:08.722343   35232 cri.go:89] found id: ""
	I1129 09:30:08.722352   35232 logs.go:282] 2 containers: [904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126]
	I1129 09:30:08.722425   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:08.727478   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:08.732128   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:30:08.732207   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:30:08.775146   35232 cri.go:89] found id: "3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:30:08.775173   35232 cri.go:89] found id: "3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:30:08.775179   35232 cri.go:89] found id: ""
	I1129 09:30:08.775188   35232 logs.go:282] 2 containers: [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df]
	I1129 09:30:08.775246   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:08.781486   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:08.785840   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:30:08.785921   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:30:08.825241   35232 cri.go:89] found id: "b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:30:08.825273   35232 cri.go:89] found id: ""
	I1129 09:30:08.825283   35232 logs.go:282] 1 containers: [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6]
	I1129 09:30:08.825355   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:08.830641   35232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:30:08.830717   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:30:08.868691   35232 cri.go:89] found id: ""
	I1129 09:30:08.868722   35232 logs.go:282] 0 containers: []
	W1129 09:30:08.868733   35232 logs.go:284] No container was found matching "kindnet"
	I1129 09:30:08.868741   35232 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:30:08.868848   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:30:08.913200   35232 cri.go:89] found id: "60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:30:08.913230   35232 cri.go:89] found id: ""
	I1129 09:30:08.913240   35232 logs.go:282] 1 containers: [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666]
	I1129 09:30:08.913309   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:08.917860   35232 logs.go:123] Gathering logs for kubelet ...
	I1129 09:30:08.917896   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:30:09.046616   35232 logs.go:123] Gathering logs for kube-apiserver [d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d] ...
	I1129 09:30:09.046655   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d"
	I1129 09:30:09.095604   35232 logs.go:123] Gathering logs for coredns [c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a] ...
	I1129 09:30:09.095659   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:30:09.134904   35232 logs.go:123] Gathering logs for storage-provisioner [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666] ...
	I1129 09:30:09.134944   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:30:09.181643   35232 logs.go:123] Gathering logs for coredns [5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b] ...
	I1129 09:30:09.181680   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b"
	I1129 09:30:09.231534   35232 logs.go:123] Gathering logs for kube-scheduler [904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b] ...
	I1129 09:30:09.231571   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b"
	I1129 09:30:09.319369   35232 logs.go:123] Gathering logs for kube-scheduler [a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126] ...
	I1129 09:30:09.319409   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:30:09.363625   35232 logs.go:123] Gathering logs for kube-proxy [3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df] ...
	I1129 09:30:09.363656   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:30:09.423019   35232 logs.go:123] Gathering logs for dmesg ...
	I1129 09:30:09.423050   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:30:09.442809   35232 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:30:09.442855   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:30:09.515736   35232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:30:09.515767   35232 logs.go:123] Gathering logs for etcd [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5] ...
	I1129 09:30:09.515787   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:30:09.571382   35232 logs.go:123] Gathering logs for kube-proxy [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9] ...
	I1129 09:30:09.571417   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:30:09.650842   35232 logs.go:123] Gathering logs for kube-controller-manager [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6] ...
	I1129 09:30:09.650900   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:30:09.704463   35232 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:30:09.704501   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:30:10.056788   35232 logs.go:123] Gathering logs for container status ...
	I1129 09:30:10.056844   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:30:08.349600   40298 api_server.go:269] stopped: https://192.168.83.104:8443/healthz: Get "https://192.168.83.104:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1129 09:30:08.349666   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:12.202995   40298 api_server.go:269] stopped: https://192.168.83.104:8443/healthz: Get "https://192.168.83.104:8443/healthz": read tcp 192.168.83.1:51072->192.168.83.104:8443: read: connection reset by peer
	I1129 09:30:12.203070   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:12.203702   40298 api_server.go:269] stopped: https://192.168.83.104:8443/healthz: Get "https://192.168.83.104:8443/healthz": dial tcp 192.168.83.104:8443: connect: connection refused
	I1129 09:30:12.347023   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:12.347727   40298 api_server.go:269] stopped: https://192.168.83.104:8443/healthz: Get "https://192.168.83.104:8443/healthz": dial tcp 192.168.83.104:8443: connect: connection refused
	I1129 09:30:12.846959   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:12.847804   40298 api_server.go:269] stopped: https://192.168.83.104:8443/healthz: Get "https://192.168.83.104:8443/healthz": dial tcp 192.168.83.104:8443: connect: connection refused
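[Annotation] Process 40298 is running the apiserver health wait against 192.168.83.104: a GET to /healthz roughly every 500ms (the .347/.847 timestamps), with each failure mode logged in turn: "context deadline exceeded" while the old apiserver is shutting down, then "connection reset" and "connection refused" while it is fully down. A minimal sketch of such a poll loop; the client setup (skipping certificate verification, the 500ms interval, the per-request timeout) is an assumption, since the log only shows the outcomes:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // the probe only cares about liveness, not server identity
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://192.168.83.104:8443/healthz"
        for {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Printf("stopped: %s: %v\n", url, err) // refused / reset / timeout
                time.Sleep(500 * time.Millisecond)
                continue
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
            fmt.Printf("%s returned %d\n", url, resp.StatusCode)
            time.Sleep(500 * time.Millisecond)
        }
    }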
	I1129 09:30:09.977305   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:09.978201   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:09.978229   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:09.978739   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:09.978788   40531 retry.go:31] will retry after 1.868339444s: waiting for domain to come up
	I1129 09:30:11.850107   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:11.850880   40531 main.go:143] libmachine: no network interface addresses found for domain auto-473168 (source=lease)
	I1129 09:30:11.850905   40531 main.go:143] libmachine: trying to list again with source=arp
	I1129 09:30:11.851459   40531 main.go:143] libmachine: unable to find current IP address of domain auto-473168 in network mk-auto-473168 (interfaces detected: [])
	I1129 09:30:11.851508   40531 retry.go:31] will retry after 3.137454875s: waiting for domain to come up
	I1129 09:30:12.611397   35232 api_server.go:253] Checking apiserver healthz at https://192.168.72.99:8443/healthz ...
	I1129 09:30:12.612062   35232 api_server.go:269] stopped: https://192.168.72.99:8443/healthz: Get "https://192.168.72.99:8443/healthz": dial tcp 192.168.72.99:8443: connect: connection refused
	I1129 09:30:12.612123   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:30:12.612171   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:30:12.650044   35232 cri.go:89] found id: "d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d"
	I1129 09:30:12.650068   35232 cri.go:89] found id: ""
	I1129 09:30:12.650077   35232 logs.go:282] 1 containers: [d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d]
	I1129 09:30:12.650141   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:12.654621   35232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:30:12.654694   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:30:12.691402   35232 cri.go:89] found id: "2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:30:12.691430   35232 cri.go:89] found id: ""
	I1129 09:30:12.691438   35232 logs.go:282] 1 containers: [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5]
	I1129 09:30:12.691492   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:12.700758   35232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:30:12.700855   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:30:12.737195   35232 cri.go:89] found id: "5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b"
	I1129 09:30:12.737216   35232 cri.go:89] found id: "c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:30:12.737221   35232 cri.go:89] found id: ""
	I1129 09:30:12.737228   35232 logs.go:282] 2 containers: [5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a]
	I1129 09:30:12.737280   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:12.741355   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:12.745996   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:30:12.746071   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:30:12.783205   35232 cri.go:89] found id: "904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b"
	I1129 09:30:12.783226   35232 cri.go:89] found id: "a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:30:12.783230   35232 cri.go:89] found id: ""
	I1129 09:30:12.783237   35232 logs.go:282] 2 containers: [904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126]
	I1129 09:30:12.783288   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:12.787559   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:12.791452   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:30:12.791524   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:30:12.827692   35232 cri.go:89] found id: "3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:30:12.827724   35232 cri.go:89] found id: "3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:30:12.827731   35232 cri.go:89] found id: ""
	I1129 09:30:12.827741   35232 logs.go:282] 2 containers: [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df]
	I1129 09:30:12.827804   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:12.832115   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:12.836391   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:30:12.836470   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:30:12.870447   35232 cri.go:89] found id: "b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:30:12.870470   35232 cri.go:89] found id: ""
	I1129 09:30:12.870482   35232 logs.go:282] 1 containers: [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6]
	I1129 09:30:12.870547   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:12.875072   35232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:30:12.875150   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:30:12.909258   35232 cri.go:89] found id: ""
	I1129 09:30:12.909284   35232 logs.go:282] 0 containers: []
	W1129 09:30:12.909291   35232 logs.go:284] No container was found matching "kindnet"
	I1129 09:30:12.909297   35232 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:30:12.909356   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:30:12.946080   35232 cri.go:89] found id: "60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:30:12.946117   35232 cri.go:89] found id: ""
	I1129 09:30:12.946127   35232 logs.go:282] 1 containers: [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666]
	I1129 09:30:12.946197   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:12.950511   35232 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:30:12.950534   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:30:13.289341   35232 logs.go:123] Gathering logs for kubelet ...
	I1129 09:30:13.289377   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:30:13.390163   35232 logs.go:123] Gathering logs for kube-scheduler [904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b] ...
	I1129 09:30:13.390199   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b"
	I1129 09:30:13.480719   35232 logs.go:123] Gathering logs for kube-controller-manager [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6] ...
	I1129 09:30:13.480759   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:30:13.517823   35232 logs.go:123] Gathering logs for kube-apiserver [d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d] ...
	I1129 09:30:13.517867   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d"
	I1129 09:30:13.559328   35232 logs.go:123] Gathering logs for etcd [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5] ...
	I1129 09:30:13.559382   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:30:13.605760   35232 logs.go:123] Gathering logs for coredns [5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b] ...
	I1129 09:30:13.605799   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b"
	I1129 09:30:13.667862   35232 logs.go:123] Gathering logs for storage-provisioner [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666] ...
	I1129 09:30:13.667915   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:30:13.724161   35232 logs.go:123] Gathering logs for container status ...
	I1129 09:30:13.724203   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:30:13.771564   35232 logs.go:123] Gathering logs for coredns [c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a] ...
	I1129 09:30:13.771605   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:30:13.809153   35232 logs.go:123] Gathering logs for kube-proxy [3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df] ...
	I1129 09:30:13.809190   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:30:13.853511   35232 logs.go:123] Gathering logs for dmesg ...
	I1129 09:30:13.853543   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:30:13.869414   35232 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:30:13.869448   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:30:13.944782   35232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:30:13.944811   35232 logs.go:123] Gathering logs for kube-scheduler [a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126] ...
	I1129 09:30:13.944842   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:30:13.982026   35232 logs.go:123] Gathering logs for kube-proxy [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9] ...
	I1129 09:30:13.982061   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:30:16.536933   35232 api_server.go:253] Checking apiserver healthz at https://192.168.72.99:8443/healthz ...
	I1129 09:30:16.537672   35232 api_server.go:269] stopped: https://192.168.72.99:8443/healthz: Get "https://192.168.72.99:8443/healthz": dial tcp 192.168.72.99:8443: connect: connection refused
	I1129 09:30:16.537733   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1129 09:30:16.537793   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1129 09:30:16.583935   35232 cri.go:89] found id: "d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d"
	I1129 09:30:16.583951   35232 cri.go:89] found id: ""
	I1129 09:30:16.583961   35232 logs.go:282] 1 containers: [d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d]
	I1129 09:30:16.584010   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:16.588618   35232 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1129 09:30:16.588689   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1129 09:30:16.630951   35232 cri.go:89] found id: "2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:30:16.630972   35232 cri.go:89] found id: ""
	I1129 09:30:16.630980   35232 logs.go:282] 1 containers: [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5]
	I1129 09:30:16.631036   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:16.635823   35232 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1129 09:30:16.635911   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1129 09:30:16.681389   35232 cri.go:89] found id: "5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b"
	I1129 09:30:16.681416   35232 cri.go:89] found id: "c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:30:16.681423   35232 cri.go:89] found id: ""
	I1129 09:30:16.681431   35232 logs.go:282] 2 containers: [5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a]
	I1129 09:30:16.681490   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:16.685871   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:16.689817   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1129 09:30:16.689908   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1129 09:30:16.735861   35232 cri.go:89] found id: "904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b"
	I1129 09:30:16.735882   35232 cri.go:89] found id: "a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:30:16.735887   35232 cri.go:89] found id: ""
	I1129 09:30:16.735895   35232 logs.go:282] 2 containers: [904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126]
	I1129 09:30:16.735952   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:16.740797   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:16.745955   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1129 09:30:16.746033   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1129 09:30:16.794503   35232 cri.go:89] found id: "3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:30:16.794548   35232 cri.go:89] found id: "3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:30:16.794554   35232 cri.go:89] found id: ""
	I1129 09:30:16.794564   35232 logs.go:282] 2 containers: [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df]
	I1129 09:30:16.794621   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:16.799130   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:16.803159   35232 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1129 09:30:16.803228   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1129 09:30:16.846625   35232 cri.go:89] found id: "b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:30:16.846646   35232 cri.go:89] found id: ""
	I1129 09:30:16.846655   35232 logs.go:282] 1 containers: [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6]
	I1129 09:30:16.846705   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:16.851012   35232 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1129 09:30:16.851082   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1129 09:30:16.899816   35232 cri.go:89] found id: ""
	I1129 09:30:16.899854   35232 logs.go:282] 0 containers: []
	W1129 09:30:16.899862   35232 logs.go:284] No container was found matching "kindnet"
	I1129 09:30:16.899869   35232 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1129 09:30:16.899923   35232 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1129 09:30:16.949008   35232 cri.go:89] found id: "60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:30:16.949035   35232 cri.go:89] found id: ""
	I1129 09:30:16.949045   35232 logs.go:282] 1 containers: [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666]
	I1129 09:30:16.949111   35232 ssh_runner.go:195] Run: which crictl
	I1129 09:30:16.954816   35232 logs.go:123] Gathering logs for dmesg ...
	I1129 09:30:16.954867   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1129 09:30:16.973593   35232 logs.go:123] Gathering logs for kube-proxy [3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9] ...
	I1129 09:30:16.973633   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3db7ed6fb9520998f5fea2aeaa31e81f69b8195a25eec1ce207f789184fb0bc9"
	I1129 09:30:13.347387   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:13.348076   40298 api_server.go:269] stopped: https://192.168.83.104:8443/healthz: Get "https://192.168.83.104:8443/healthz": dial tcp 192.168.83.104:8443: connect: connection refused
	I1129 09:30:13.847864   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:13.848640   40298 api_server.go:269] stopped: https://192.168.83.104:8443/healthz: Get "https://192.168.83.104:8443/healthz": dial tcp 192.168.83.104:8443: connect: connection refused
	I1129 09:30:14.347306   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:16.095772   40298 api_server.go:279] https://192.168.83.104:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1129 09:30:16.095797   40298 api_server.go:103] status: https://192.168.83.104:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1129 09:30:16.095810   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:16.128362   40298 api_server.go:279] https://192.168.83.104:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1129 09:30:16.128393   40298 api_server.go:103] status: https://192.168.83.104:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1129 09:30:16.347795   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:16.354741   40298 api_server.go:279] https://192.168.83.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:30:16.354775   40298 api_server.go:103] status: https://192.168.83.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:30:16.847194   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:16.852745   40298 api_server.go:279] https://192.168.83.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:30:16.852770   40298 api_server.go:103] status: https://192.168.83.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:30:17.347403   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:17.354266   40298 api_server.go:279] https://192.168.83.104:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1129 09:30:17.354295   40298 api_server.go:103] status: https://192.168.83.104:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1129 09:30:17.846939   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:17.852114   40298 api_server.go:279] https://192.168.83.104:8443/healthz returned 200:
	ok
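[Annotation] The 500 responses above are the verbose /healthz report: every check is listed with [+] or [-], and the [-] post-start hooks (rbac/bootstrap-roles, then scheduling/bootstrap-system-priority-classes) are what keeps the endpoint unhealthy; each successive poll shows one more hook completing until the plain "ok" at 09:30:17. Pulling the failing checks out of that report is a one-liner per line; a sketch:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    func failedChecks(verbose string) []string {
        var failed []string
        sc := bufio.NewScanner(strings.NewReader(verbose))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "[-]") {
                // e.g. "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld"
                failed = append(failed, strings.TrimPrefix(line, "[-]"))
            }
        }
        return failed
    }

    func main() {
        report := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n[+]etcd ok\n"
        fmt.Println(failedChecks(report))
    }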
	I1129 09:30:17.860756   40298 api_server.go:141] control plane version: v1.34.1
	I1129 09:30:17.860790   40298 api_server.go:131] duration metric: took 24.513902411s to wait for apiserver health ...
	I1129 09:30:17.860802   40298 cni.go:84] Creating CNI manager for ""
	I1129 09:30:17.860812   40298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1129 09:30:17.862794   40298 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1129 09:30:17.864382   40298 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1129 09:30:17.887645   40298 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
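[Annotation] With the apiserver healthy, minikube selects the bridge CNI for the kvm2+crio combination and copies a conflist into /etc/cni/net.d/1-k8s.conflist. The 496-byte payload itself is not shown in the log; the sketch below emits a typical bridge+host-local conflist of that general shape (every field value here is an assumption, not minikube's actual template):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        conflist := map[string]interface{}{
            "cniVersion": "1.0.0",
            "name":       "bridge",
            "plugins": []map[string]interface{}{
                {
                    "type":      "bridge",
                    "bridge":    "bridge",
                    "isGateway": true,
                    "ipMasq":    true,
                    "ipam": map[string]interface{}{
                        "type":   "host-local",
                        "subnet": "10.244.0.0/16",
                    },
                },
                {"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
            },
        }
        out, _ := json.MarshalIndent(conflist, "", "  ")
        fmt.Println(string(out)) // candidate contents for /etc/cni/net.d/1-k8s.conflist
    }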
	I1129 09:30:17.915179   40298 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:30:17.930946   40298 system_pods.go:59] 6 kube-system pods found
	I1129 09:30:17.930991   40298 system_pods.go:61] "coredns-66bc5c9577-4bmms" [64220006-2ede-426c-bd55-8a0c72981851] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:30:17.931002   40298 system_pods.go:61] "etcd-pause-893760" [e4f015d5-b1a6-4405-b118-9db7b7341c41] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:30:17.931012   40298 system_pods.go:61] "kube-apiserver-pause-893760" [3fea2b50-f890-473d-969e-0ff61c070432] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:30:17.931023   40298 system_pods.go:61] "kube-controller-manager-pause-893760" [cdf18de5-80b4-431a-9287-71bbef4a21b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:30:17.931030   40298 system_pods.go:61] "kube-proxy-rzkwr" [8d0fdc57-ce2f-483b-82f2-006931b3ab39] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1129 09:30:17.931037   40298 system_pods.go:61] "kube-scheduler-pause-893760" [fcb17e31-c1eb-4490-9ff2-f3ad36f7b4a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:30:17.931045   40298 system_pods.go:74] duration metric: took 15.840219ms to wait for pod list to return data ...
	I1129 09:30:17.931054   40298 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:30:17.949644   40298 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1129 09:30:17.949681   40298 node_conditions.go:123] node cpu capacity is 2
	I1129 09:30:17.949704   40298 node_conditions.go:105] duration metric: took 18.643586ms to run NodePressure ...
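[Annotation] The pod sweep above comes straight from the recovering apiserver via the freshly written kubeconfig: six kube-system pods, each Running or Pending but with containers not yet Ready, followed by a NodePressure check of the node's capacity. A client-go sketch of the same listing (the kubeconfig path is taken from the log; the output format is an assumption):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22000-5651/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
    }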
	I1129 09:30:17.949770   40298 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1129 09:30:18.324785   40298 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1129 09:30:18.329443   40298 kubeadm.go:744] kubelet initialised
	I1129 09:30:18.329468   40298 kubeadm.go:745] duration metric: took 4.654312ms waiting for restarted kubelet to initialise ...
	I1129 09:30:18.329487   40298 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1129 09:30:18.353236   40298 ops.go:34] apiserver oom_adj: -16
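[Annotation] After kubeadm's addon phase, minikube verifies that the restarted apiserver is shielded from the OOM killer by reading its oom_adj (-16 here). The same check in Go, mirroring the `cat /proc/$(pgrep kube-apiserver)/oom_adj` pipeline from the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // pgrep prints the PIDs of matching processes, one per line
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        fields := strings.Fields(string(out))
        if len(fields) == 0 {
            panic("kube-apiserver not running")
        }
        adj, err := os.ReadFile("/proc/" + fields[0] + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
    }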
	I1129 09:30:18.353267   40298 kubeadm.go:602] duration metric: took 27.388549197s to restartPrimaryControlPlane
	I1129 09:30:18.353279   40298 kubeadm.go:403] duration metric: took 27.658955597s to StartCluster
	I1129 09:30:18.353299   40298 settings.go:142] acquiring lock: {Name:mkb0bfd7d63d07772bc8411985c986880254a5d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:30:18.353410   40298 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22000-5651/kubeconfig
	I1129 09:30:18.354989   40298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/kubeconfig: {Name:mk06369260b11b7542906282ff812e026bce8478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:30:18.355302   40298 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.83.104 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1129 09:30:18.355391   40298 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1129 09:30:18.355594   40298 config.go:182] Loaded profile config "pause-893760": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:30:18.358261   40298 out.go:179] * Verifying Kubernetes components...
	I1129 09:30:18.358290   40298 out.go:179] * Enabled addons: 
	I1129 09:30:14.990078   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:14.990922   40531 main.go:143] libmachine: domain auto-473168 has current primary IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:14.990938   40531 main.go:143] libmachine: found domain IP: 192.168.50.142
	I1129 09:30:14.990945   40531 main.go:143] libmachine: reserving static IP address...
	I1129 09:30:14.991398   40531 main.go:143] libmachine: unable to find host DHCP lease matching {name: "auto-473168", mac: "52:54:00:a2:da:2e", ip: "192.168.50.142"} in network mk-auto-473168
	I1129 09:30:15.237104   40531 main.go:143] libmachine: reserved static IP address 192.168.50.142 for domain auto-473168
	I1129 09:30:15.237132   40531 main.go:143] libmachine: waiting for SSH...
	I1129 09:30:15.237148   40531 main.go:143] libmachine: Getting to WaitForSSH function...
	I1129 09:30:15.240985   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.241605   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:15.241635   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.241890   40531 main.go:143] libmachine: Using SSH client type: native
	I1129 09:30:15.242127   40531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1129 09:30:15.242139   40531 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1129 09:30:15.352530   40531 main.go:143] libmachine: SSH cmd err, output: <nil>: 
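The `exit 0` round-trip above is the "waiting for SSH" probe: libmachine dials the freshly created guest repeatedly until a trivial command succeeds. A minimal sketch of such a probe in Go, assuming key-based auth and golang.org/x/crypto/ssh; the helper name waitForSSH and its parameters are illustrative, not minikube's actual API:

    package sketch

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // waitForSSH polls the guest until `exit 0` succeeds or the deadline passes.
    func waitForSSH(addr, user, keyFile string, timeout time.Duration) error {
        key, err := os.ReadFile(keyFile)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // new VM, no known host key yet
            Timeout:         5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
                sess, serr := client.NewSession()
                if serr == nil {
                    rerr := sess.Run("exit 0") // the same trivial command as in the log
                    sess.Close()
                    client.Close()
                    if rerr == nil {
                        return nil
                    }
                } else {
                    client.Close()
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh not reachable at %s within %s", addr, timeout)
    }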
	I1129 09:30:15.353087   40531 main.go:143] libmachine: domain creation complete
	I1129 09:30:15.355012   40531 machine.go:94] provisionDockerMachine start ...
	I1129 09:30:15.357987   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.358462   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:15.358491   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.358682   40531 main.go:143] libmachine: Using SSH client type: native
	I1129 09:30:15.358977   40531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1129 09:30:15.358999   40531 main.go:143] libmachine: About to run SSH command:
	hostname
	I1129 09:30:15.469789   40531 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1129 09:30:15.469862   40531 buildroot.go:166] provisioning hostname "auto-473168"
	I1129 09:30:15.473205   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.473757   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:15.473807   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.474061   40531 main.go:143] libmachine: Using SSH client type: native
	I1129 09:30:15.474306   40531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1129 09:30:15.474326   40531 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-473168 && echo "auto-473168" | sudo tee /etc/hostname
	I1129 09:30:15.609354   40531 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-473168
	
	I1129 09:30:15.613239   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.613747   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:15.613792   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.614045   40531 main.go:143] libmachine: Using SSH client type: native
	I1129 09:30:15.614352   40531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1129 09:30:15.614378   40531 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-473168' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-473168/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-473168' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1129 09:30:15.731159   40531 main.go:143] libmachine: SSH cmd err, output: <nil>: 
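The shell fragment above either rewrites an existing 127.0.1.1 entry in /etc/hosts or appends one. Expressed as a Go helper that renders the same fragment for any hostname (a sketch only; the quoting and helper name are my assumptions, not minikube's code):

    package sketch

    import "fmt"

    // setHostnameCmd renders the /etc/hosts edit shown in the log for `name`.
    func setHostnameCmd(name string) string {
        return fmt.Sprintf(`
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
      fi
    fi`, name)
    }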
	I1129 09:30:15.731209   40531 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22000-5651/.minikube CaCertPath:/home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22000-5651/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22000-5651/.minikube}
	I1129 09:30:15.731254   40531 buildroot.go:174] setting up certificates
	I1129 09:30:15.731269   40531 provision.go:84] configureAuth start
	I1129 09:30:15.734244   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.734670   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:15.734693   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.737304   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.737747   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:15.737774   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.737962   40531 provision.go:143] copyHostCerts
	I1129 09:30:15.738033   40531 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5651/.minikube/ca.pem, removing ...
	I1129 09:30:15.738048   40531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5651/.minikube/ca.pem
	I1129 09:30:15.738141   40531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22000-5651/.minikube/ca.pem (1082 bytes)
	I1129 09:30:15.738245   40531 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5651/.minikube/cert.pem, removing ...
	I1129 09:30:15.738260   40531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5651/.minikube/cert.pem
	I1129 09:30:15.738290   40531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22000-5651/.minikube/cert.pem (1123 bytes)
	I1129 09:30:15.738349   40531 exec_runner.go:144] found /home/jenkins/minikube-integration/22000-5651/.minikube/key.pem, removing ...
	I1129 09:30:15.738356   40531 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22000-5651/.minikube/key.pem
	I1129 09:30:15.738378   40531 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22000-5651/.minikube/key.pem (1679 bytes)
	I1129 09:30:15.738442   40531 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22000-5651/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca-key.pem org=jenkins.auto-473168 san=[127.0.0.1 192.168.50.142 auto-473168 localhost minikube]
	I1129 09:30:15.837963   40531 provision.go:177] copyRemoteCerts
	I1129 09:30:15.838043   40531 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1129 09:30:15.841894   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.842402   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:15.842449   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:15.842635   40531 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168/id_rsa Username:docker}
	I1129 09:30:15.932336   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1129 09:30:15.968366   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1129 09:30:16.006925   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1129 09:30:16.040753   40531 provision.go:87] duration metric: took 309.466886ms to configureAuth
	I1129 09:30:16.040784   40531 buildroot.go:189] setting minikube options for container-runtime
	I1129 09:30:16.040988   40531 config.go:182] Loaded profile config "auto-473168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:30:16.044568   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.045175   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:16.045203   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.045440   40531 main.go:143] libmachine: Using SSH client type: native
	I1129 09:30:16.045788   40531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1129 09:30:16.045821   40531 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1129 09:30:16.313142   40531 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1129 09:30:16.313173   40531 machine.go:97] duration metric: took 958.140926ms to provisionDockerMachine
	I1129 09:30:16.313189   40531 client.go:176] duration metric: took 17.228376368s to LocalClient.Create
	I1129 09:30:16.313210   40531 start.go:167] duration metric: took 17.228446593s to libmachine.API.Create "auto-473168"
	I1129 09:30:16.313221   40531 start.go:293] postStartSetup for "auto-473168" (driver="kvm2")
	I1129 09:30:16.313234   40531 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1129 09:30:16.313316   40531 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1129 09:30:16.317190   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.317844   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:16.317885   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.318111   40531 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168/id_rsa Username:docker}
	I1129 09:30:16.404732   40531 ssh_runner.go:195] Run: cat /etc/os-release
	I1129 09:30:16.409948   40531 info.go:137] Remote host: Buildroot 2025.02
	I1129 09:30:16.409984   40531 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5651/.minikube/addons for local assets ...
	I1129 09:30:16.410055   40531 filesync.go:126] Scanning /home/jenkins/minikube-integration/22000-5651/.minikube/files for local assets ...
	I1129 09:30:16.410130   40531 filesync.go:149] local asset: /home/jenkins/minikube-integration/22000-5651/.minikube/files/etc/ssl/certs/96132.pem -> 96132.pem in /etc/ssl/certs
	I1129 09:30:16.410277   40531 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1129 09:30:16.423910   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/files/etc/ssl/certs/96132.pem --> /etc/ssl/certs/96132.pem (1708 bytes)
	I1129 09:30:16.455139   40531 start.go:296] duration metric: took 141.90363ms for postStartSetup
	I1129 09:30:16.459469   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.460716   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:16.460750   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.461166   40531 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/config.json ...
	I1129 09:30:16.461374   40531 start.go:128] duration metric: took 17.378264826s to createHost
	I1129 09:30:16.464280   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.464775   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:16.464801   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.464980   40531 main.go:143] libmachine: Using SSH client type: native
	I1129 09:30:16.465198   40531 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 192.168.50.142 22 <nil> <nil>}
	I1129 09:30:16.465214   40531 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1129 09:30:16.573908   40531 main.go:143] libmachine: SSH cmd err, output: <nil>: 1764408616.532817684
	
	I1129 09:30:16.573941   40531 fix.go:216] guest clock: 1764408616.532817684
	I1129 09:30:16.573951   40531 fix.go:229] Guest: 2025-11-29 09:30:16.532817684 +0000 UTC Remote: 2025-11-29 09:30:16.461396315 +0000 UTC m=+17.492663956 (delta=71.421369ms)
	I1129 09:30:16.573972   40531 fix.go:200] guest clock delta is within tolerance: 71.421369ms
	I1129 09:30:16.573979   40531 start.go:83] releasing machines lock for "auto-473168", held for 17.49095756s
	I1129 09:30:16.577459   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.578120   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:16.578155   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.578784   40531 ssh_runner.go:195] Run: cat /version.json
	I1129 09:30:16.578816   40531 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1129 09:30:16.582709   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.582927   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.583272   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:16.583305   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:16.583308   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.583335   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:16.583559   40531 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168/id_rsa Username:docker}
	I1129 09:30:16.583561   40531 sshutil.go:53] new ssh client: &{IP:192.168.50.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/auto-473168/id_rsa Username:docker}
	I1129 09:30:16.695109   40531 ssh_runner.go:195] Run: systemctl --version
	I1129 09:30:16.703078   40531 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1129 09:30:16.874865   40531 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1129 09:30:16.881861   40531 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1129 09:30:16.881975   40531 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1129 09:30:16.912248   40531 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1129 09:30:16.912281   40531 start.go:496] detecting cgroup driver to use...
	I1129 09:30:16.912362   40531 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1129 09:30:16.934073   40531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1129 09:30:16.962395   40531 docker.go:218] disabling cri-docker service (if available) ...
	I1129 09:30:16.962472   40531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1129 09:30:16.982957   40531 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1129 09:30:17.000541   40531 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1129 09:30:17.220267   40531 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1129 09:30:17.468862   40531 docker.go:234] disabling docker service ...
	I1129 09:30:17.468928   40531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1129 09:30:17.488703   40531 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1129 09:30:17.507554   40531 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1129 09:30:17.730483   40531 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1129 09:30:17.926142   40531 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1129 09:30:17.947566   40531 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1129 09:30:17.979464   40531 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1129 09:30:17.979571   40531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:30:17.993622   40531 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1129 09:30:17.993695   40531 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:30:18.010484   40531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:30:18.026166   40531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:30:18.040388   40531 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1129 09:30:18.055169   40531 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:30:18.068695   40531 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1129 09:30:18.093322   40531 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
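The pause_image and cgroup_manager edits above share one shape: sudo sed -i 's|^.*KEY = .*$|KEY = "VALUE"|' on the CRI-O drop-in. A tiny generator for that pattern (a sketch, not minikube's actual helper; it covers only the key = "value" rewrites, not the insert/delete edits that follow):

    package sketch

    import "fmt"

    // crioSet renders the sed command that pins `key = "value"` in the
    // CRI-O drop-in config, matching the log lines above.
    func crioSet(key, value string) string {
        expr := fmt.Sprintf(`s|^.*%s = .*$|%s = "%s"|`, key, key, value)
        return fmt.Sprintf(`sudo sed -i '%s' /etc/crio/crio.conf.d/02-crio.conf`, expr)
    }

crioSet("pause_image", "registry.k8s.io/pause:3.10.1") and crioSet("cgroup_manager", "cgroupfs") reproduce the two commands in the log verbatim.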
	I1129 09:30:18.107183   40531 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1129 09:30:18.121797   40531 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1129 09:30:18.121887   40531 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1129 09:30:18.144951   40531 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
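The three steps above form a fallback chain: probe the bridge-netfilter sysctl, load br_netfilter when the probe fails (the /proc/sys/net/bridge path only exists once the module is in), then enable IPv4 forwarding. Sketched with a generic command runner standing in for minikube's ssh_runner:

    package sketch

    // ensureNetfilter mirrors the probe/modprobe/ip_forward sequence above.
    // run is a stand-in for an SSH command runner.
    func ensureNetfilter(run func(cmd string) error) error {
        if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
            // sysctl fails until br_netfilter is loaded; try loading it.
            if err := run("sudo modprobe br_netfilter"); err != nil {
                return err
            }
        }
        return run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
    }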
	I1129 09:30:18.161321   40531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:30:18.340687   40531 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1129 09:30:18.481488   40531 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1129 09:30:18.481582   40531 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1129 09:30:18.488806   40531 start.go:564] Will wait 60s for crictl version
	I1129 09:30:18.488893   40531 ssh_runner.go:195] Run: which crictl
	I1129 09:30:18.493636   40531 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1129 09:30:18.536697   40531 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1129 09:30:18.536787   40531 ssh_runner.go:195] Run: crio --version
	I1129 09:30:18.573081   40531 ssh_runner.go:195] Run: crio --version
	I1129 09:30:18.607893   40531 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1129 09:30:18.612851   40531 main.go:143] libmachine: domain auto-473168 has defined MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:18.613441   40531 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a2:da:2e", ip: ""} in network mk-auto-473168: {Iface:virbr2 ExpiryTime:2025-11-29 10:30:14 +0000 UTC Type:0 Mac:52:54:00:a2:da:2e Iaid: IPaddr:192.168.50.142 Prefix:24 Hostname:auto-473168 Clientid:01:52:54:00:a2:da:2e}
	I1129 09:30:18.613478   40531 main.go:143] libmachine: domain auto-473168 has defined IP address 192.168.50.142 and MAC address 52:54:00:a2:da:2e in network mk-auto-473168
	I1129 09:30:18.613780   40531 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1129 09:30:18.620120   40531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:30:18.637054   40531 kubeadm.go:884] updating cluster {Name:auto-473168 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-473168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.142 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1129 09:30:18.637252   40531 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 09:30:18.637320   40531 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:30:18.678077   40531 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1129 09:30:18.678166   40531 ssh_runner.go:195] Run: which lz4
	I1129 09:30:18.683018   40531 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1129 09:30:18.688160   40531 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1129 09:30:18.688190   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1129 09:30:17.032097   35232 logs.go:123] Gathering logs for kube-controller-manager [b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6] ...
	I1129 09:30:17.032140   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b547efa86b8a90d7066ae2257b2e18e422eab9e93cc7422b474e90195efe4ce6"
	I1129 09:30:17.080334   35232 logs.go:123] Gathering logs for kube-apiserver [d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d] ...
	I1129 09:30:17.080376   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d48e80747efb0a7299333799c29f49cfdbd3b6bc0de0805365999efa702fd58d"
	I1129 09:30:17.124940   35232 logs.go:123] Gathering logs for coredns [5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b] ...
	I1129 09:30:17.124976   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5358d24a3979f6daa23360bc37381077cc9aa8f9e6f506987a0093dcdfef9e8b"
	I1129 09:30:17.192543   35232 logs.go:123] Gathering logs for CRI-O ...
	I1129 09:30:17.192593   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1129 09:30:17.644467   35232 logs.go:123] Gathering logs for kubelet ...
	I1129 09:30:17.644521   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1129 09:30:17.783135   35232 logs.go:123] Gathering logs for etcd [2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5] ...
	I1129 09:30:17.783179   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e69eda69d16d6fe28bf17689746be4b1f9b7649008c3f1edad511e1f78de9a5"
	I1129 09:30:17.827955   35232 logs.go:123] Gathering logs for kube-scheduler [904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b] ...
	I1129 09:30:17.827998   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 904ff6631b0ffe20748db8267b29342e7f3a3b1131114c40fbcf138120939d6b"
	I1129 09:30:17.936342   35232 logs.go:123] Gathering logs for describe nodes ...
	I1129 09:30:17.936395   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1129 09:30:18.040468   35232 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1129 09:30:18.040486   35232 logs.go:123] Gathering logs for coredns [c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a] ...
	I1129 09:30:18.040503   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9bccb5fa22196877d8e0274f12a567a9ab514f23ef65305e11753b36fff2f8a"
	I1129 09:30:18.095107   35232 logs.go:123] Gathering logs for kube-scheduler [a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126] ...
	I1129 09:30:18.095147   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a66c518ead129107ef6142b0aec845a5d683937dba9032ef936d3d56809cf126"
	I1129 09:30:18.151486   35232 logs.go:123] Gathering logs for kube-proxy [3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df] ...
	I1129 09:30:18.151528   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3668a5695fdb442731a66c68c822527cc30af65b05773a36826edb2989f038df"
	I1129 09:30:18.197722   35232 logs.go:123] Gathering logs for storage-provisioner [60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666] ...
	I1129 09:30:18.197779   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60417b01490f729f1d86437f9eeb41151b2ed8351291a7eeadd3463c343ae666"
	I1129 09:30:18.257050   35232 logs.go:123] Gathering logs for container status ...
	I1129 09:30:18.257088   35232 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1129 09:30:20.818925   35232 api_server.go:253] Checking apiserver healthz at https://192.168.72.99:8443/healthz ...
	I1129 09:30:20.819701   35232 api_server.go:269] stopped: https://192.168.72.99:8443/healthz: Get "https://192.168.72.99:8443/healthz": dial tcp 192.168.72.99:8443: connect: connection refused
	I1129 09:30:20.819776   35232 kubeadm.go:602] duration metric: took 4m18.218810899s to restartPrimaryControlPlane
	W1129 09:30:20.819857   35232 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1129 09:30:20.819917   35232 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1129 09:30:18.359578   40298 addons.go:530] duration metric: took 4.197342ms for enable addons: enabled=[]
	I1129 09:30:18.359630   40298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:30:18.605921   40298 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:30:18.646410   40298 node_ready.go:35] waiting up to 6m0s for node "pause-893760" to be "Ready" ...
	I1129 09:30:18.651110   40298 node_ready.go:49] node "pause-893760" is "Ready"
	I1129 09:30:18.651149   40298 node_ready.go:38] duration metric: took 4.696684ms for node "pause-893760" to be "Ready" ...
	I1129 09:30:18.651169   40298 api_server.go:52] waiting for apiserver process to appear ...
	I1129 09:30:18.651240   40298 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:30:18.682538   40298 api_server.go:72] duration metric: took 327.201087ms to wait for apiserver process to appear ...
	I1129 09:30:18.682561   40298 api_server.go:88] waiting for apiserver healthz status ...
	I1129 09:30:18.682583   40298 api_server.go:253] Checking apiserver healthz at https://192.168.83.104:8443/healthz ...
	I1129 09:30:18.691277   40298 api_server.go:279] https://192.168.83.104:8443/healthz returned 200:
	ok
	I1129 09:30:18.693021   40298 api_server.go:141] control plane version: v1.34.1
	I1129 09:30:18.693055   40298 api_server.go:131] duration metric: took 10.485429ms to wait for apiserver health ...
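The healthz wait above is a plain HTTP GET against https://<node>:8443/healthz that passes once the endpoint returns 200 with body "ok". A minimal form of the check; the caller would supply an http.Client trusting the cluster CA, which is omitted here:

    package sketch

    import (
        "io"
        "net/http"
    )

    // apiserverHealthy reports whether GET url returns 200 and "ok",
    // matching the `returned 200: ok` lines in the log.
    func apiserverHealthy(client *http.Client, url string) bool {
        resp, err := client.Get(url)
        if err != nil {
            return false
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && string(body) == "ok"
    }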
	I1129 09:30:18.693066   40298 system_pods.go:43] waiting for kube-system pods to appear ...
	I1129 09:30:18.699543   40298 system_pods.go:59] 6 kube-system pods found
	I1129 09:30:18.699582   40298 system_pods.go:61] "coredns-66bc5c9577-4bmms" [64220006-2ede-426c-bd55-8a0c72981851] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:30:18.699593   40298 system_pods.go:61] "etcd-pause-893760" [e4f015d5-b1a6-4405-b118-9db7b7341c41] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:30:18.699603   40298 system_pods.go:61] "kube-apiserver-pause-893760" [3fea2b50-f890-473d-969e-0ff61c070432] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:30:18.699613   40298 system_pods.go:61] "kube-controller-manager-pause-893760" [cdf18de5-80b4-431a-9287-71bbef4a21b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:30:18.699618   40298 system_pods.go:61] "kube-proxy-rzkwr" [8d0fdc57-ce2f-483b-82f2-006931b3ab39] Running
	I1129 09:30:18.699625   40298 system_pods.go:61] "kube-scheduler-pause-893760" [fcb17e31-c1eb-4490-9ff2-f3ad36f7b4a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:30:18.699634   40298 system_pods.go:74] duration metric: took 6.561137ms to wait for pod list to return data ...
	I1129 09:30:18.699644   40298 default_sa.go:34] waiting for default service account to be created ...
	I1129 09:30:18.704131   40298 default_sa.go:45] found service account: "default"
	I1129 09:30:18.704160   40298 default_sa.go:55] duration metric: took 4.507979ms for default service account to be created ...
	I1129 09:30:18.704174   40298 system_pods.go:116] waiting for k8s-apps to be running ...
	I1129 09:30:18.709864   40298 system_pods.go:86] 6 kube-system pods found
	I1129 09:30:18.709896   40298 system_pods.go:89] "coredns-66bc5c9577-4bmms" [64220006-2ede-426c-bd55-8a0c72981851] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1129 09:30:18.709908   40298 system_pods.go:89] "etcd-pause-893760" [e4f015d5-b1a6-4405-b118-9db7b7341c41] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1129 09:30:18.709916   40298 system_pods.go:89] "kube-apiserver-pause-893760" [3fea2b50-f890-473d-969e-0ff61c070432] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1129 09:30:18.709924   40298 system_pods.go:89] "kube-controller-manager-pause-893760" [cdf18de5-80b4-431a-9287-71bbef4a21b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1129 09:30:18.709929   40298 system_pods.go:89] "kube-proxy-rzkwr" [8d0fdc57-ce2f-483b-82f2-006931b3ab39] Running
	I1129 09:30:18.709937   40298 system_pods.go:89] "kube-scheduler-pause-893760" [fcb17e31-c1eb-4490-9ff2-f3ad36f7b4a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1129 09:30:18.709947   40298 system_pods.go:126] duration metric: took 5.765488ms to wait for k8s-apps to be running ...
	I1129 09:30:18.709957   40298 system_svc.go:44] waiting for kubelet service to be running ....
	I1129 09:30:18.710013   40298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:30:18.737914   40298 system_svc.go:56] duration metric: took 27.945973ms WaitForService to wait for kubelet
	I1129 09:30:18.737944   40298 kubeadm.go:587] duration metric: took 382.610289ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1129 09:30:18.737959   40298 node_conditions.go:102] verifying NodePressure condition ...
	I1129 09:30:18.741786   40298 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1129 09:30:18.741808   40298 node_conditions.go:123] node cpu capacity is 2
	I1129 09:30:18.741817   40298 node_conditions.go:105] duration metric: took 3.853022ms to run NodePressure ...
	I1129 09:30:18.741849   40298 start.go:242] waiting for startup goroutines ...
	I1129 09:30:18.741859   40298 start.go:247] waiting for cluster config update ...
	I1129 09:30:18.741869   40298 start.go:256] writing updated cluster config ...
	I1129 09:30:18.742144   40298 ssh_runner.go:195] Run: rm -f paused
	I1129 09:30:18.748084   40298 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:30:18.748970   40298 kapi.go:59] client config for pause-893760: &rest.Config{Host:"https://192.168.83.104:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22000-5651/.minikube/profiles/pause-893760/client.crt", KeyFile:"/home/jenkins/minikube-integration/22000-5651/.minikube/profiles/pause-893760/client.key", CAFile:"/home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2815480), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1129 09:30:18.753403   40298 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4bmms" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:19.761324   40298 pod_ready.go:94] pod "coredns-66bc5c9577-4bmms" is "Ready"
	I1129 09:30:19.761362   40298 pod_ready.go:86] duration metric: took 1.007924984s for pod "coredns-66bc5c9577-4bmms" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:19.769111   40298 pod_ready.go:83] waiting for pod "etcd-pause-893760" in "kube-system" namespace to be "Ready" or be gone ...
	W1129 09:30:21.776627   40298 pod_ready.go:104] pod "etcd-pause-893760" is not "Ready", error: <nil>
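pod_ready.go's loop above waits until each control-plane pod either reports the Ready condition or is gone. With client-go, the per-pod check is roughly the following (a sketch; not-found handling and the waiting loop around it are simplified away):

    package sketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the named pod has condition Ready=True.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }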
	I1129 09:30:20.283449   40531 crio.go:462] duration metric: took 1.600445201s to copy over tarball
	I1129 09:30:20.283574   40531 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1129 09:30:22.020620   40531 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.737008634s)
	I1129 09:30:22.020659   40531 crio.go:469] duration metric: took 1.737163229s to extract the tarball
	I1129 09:30:22.020670   40531 ssh_runner.go:146] rm: /preloaded.tar.lz4
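Taken together with the earlier 40531 lines (the stat probe and the 409 MB scp at 09:30:18), the extraction above completes a three-step preload path: probe for /preloaded.tar.lz4 on the guest, copy it over if absent, untar into /var, then delete it. As a condensed sketch with placeholder runner/copier functions standing in for minikube's ssh_runner helpers:

    package sketch

    // ensurePreload condenses the stat/scp/tar/rm sequence from the log.
    // run executes a command on the guest; copyTarball is the scp-over-ssh
    // step (placeholder here).
    func ensurePreload(run func(cmd string) error, copyTarball func() error) error {
        if err := run(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
            if err := copyTarball(); err != nil { // ~400 MB transfer
                return err
            }
        }
        if err := run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
            return err
        }
        return run("rm -f /preloaded.tar.lz4") // reclaim the space
    }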
	I1129 09:30:22.066503   40531 ssh_runner.go:195] Run: sudo crictl images --output json
	I1129 09:30:22.118112   40531 crio.go:514] all images are preloaded for cri-o runtime.
	I1129 09:30:22.118139   40531 cache_images.go:86] Images are preloaded, skipping loading
	I1129 09:30:22.118149   40531 kubeadm.go:935] updating node { 192.168.50.142 8443 v1.34.1 crio true true} ...
	I1129 09:30:22.118253   40531 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-473168 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.142
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-473168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1129 09:30:22.118330   40531 ssh_runner.go:195] Run: crio config
	I1129 09:30:22.170237   40531 cni.go:84] Creating CNI manager for ""
	I1129 09:30:22.170266   40531 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1129 09:30:22.170284   40531 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1129 09:30:22.170307   40531 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.142 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-473168 NodeName:auto-473168 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.142"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.142 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1129 09:30:22.170470   40531 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.142
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-473168"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.142"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.142"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1129 09:30:22.170538   40531 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1129 09:30:22.183836   40531 binaries.go:51] Found k8s binaries, skipping transfer
	I1129 09:30:22.183902   40531 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1129 09:30:22.196062   40531 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1129 09:30:22.218640   40531 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1129 09:30:22.240655   40531 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1129 09:30:22.263705   40531 ssh_runner.go:195] Run: grep 192.168.50.142	control-plane.minikube.internal$ /etc/hosts
	I1129 09:30:22.268507   40531 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.142	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1129 09:30:22.285554   40531 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1129 09:30:22.457149   40531 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1129 09:30:22.482545   40531 certs.go:69] Setting up /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168 for IP: 192.168.50.142
	I1129 09:30:22.482567   40531 certs.go:195] generating shared ca certs ...
	I1129 09:30:22.482583   40531 certs.go:227] acquiring lock for ca certs: {Name:mk263acc791d5a2c77504c81548ce554781ff9eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:30:22.482744   40531 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22000-5651/.minikube/ca.key
	I1129 09:30:22.482785   40531 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22000-5651/.minikube/proxy-client-ca.key
	I1129 09:30:22.482792   40531 certs.go:257] generating profile certs ...
	I1129 09:30:22.482876   40531 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.key
	I1129 09:30:22.482890   40531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt with IP's: []
	I1129 09:30:22.645863   40531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt ...
	I1129 09:30:22.645892   40531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt: {Name:mk293d20ece963a3fdd9eef1ebb9b8ff8cae849d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:30:22.646065   40531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.key ...
	I1129 09:30:22.646076   40531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.key: {Name:mk2d0cfd80cc68c78b1a019a43e17f4a2d89ced5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:30:22.646153   40531 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.key.69d217c5
	I1129 09:30:22.646168   40531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.crt.69d217c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.142]
	I1129 09:30:22.722331   40531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.crt.69d217c5 ...
	I1129 09:30:22.722360   40531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.crt.69d217c5: {Name:mkdf3d4714b22705338cbe8f7750f3230b03791b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:30:22.722524   40531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.key.69d217c5 ...
	I1129 09:30:22.722539   40531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.key.69d217c5: {Name:mk494412e04878075c93b21456db16692b1823af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:30:22.722623   40531 certs.go:382] copying /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.crt.69d217c5 -> /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.crt
	I1129 09:30:22.722704   40531 certs.go:386] copying /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.key.69d217c5 -> /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.key
	I1129 09:30:22.722757   40531 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/proxy-client.key
	I1129 09:30:22.722768   40531 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/proxy-client.crt with IP's: []
	I1129 09:30:22.832789   40531 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/proxy-client.crt ...
	I1129 09:30:22.832815   40531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/proxy-client.crt: {Name:mkffcbc42b8fa26a5b25c89183d999a2f1f5010f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1129 09:30:22.832977   40531 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/proxy-client.key ...
	I1129 09:30:22.832989   40531 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/proxy-client.key: {Name:mkc7c5e824a41d56bc8478b0326edc3a0a8df5f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
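crypto.go's "Generating cert ... with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.142]" above boils down to an x509 template whose IPAddresses field carries those SANs, signed by the minikube CA. A compact sketch with crypto/x509; the serial-number and validity choices here are assumptions for illustration, not minikube's:

    package sketch

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // signServerCert issues a CA-signed server cert whose SANs are the
    // given IPs, like the apiserver profile cert in the log.
    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) (certDER []byte, key *rsa.PrivateKey, err error) {
        key, err = rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()), // simplified
            Subject:      pkix.Name{CommonName: "minikube"},
            IPAddresses:  ips, // becomes the SAN list
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        certDER, err = x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        return certDER, key, err
    }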
	I1129 09:30:22.833159   40531 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/9613.pem (1338 bytes)
	W1129 09:30:22.833198   40531 certs.go:480] ignoring /home/jenkins/minikube-integration/22000-5651/.minikube/certs/9613_empty.pem, impossibly tiny 0 bytes
	I1129 09:30:22.833210   40531 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca-key.pem (1675 bytes)
	I1129 09:30:22.833236   40531 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/ca.pem (1082 bytes)
	I1129 09:30:22.833260   40531 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/cert.pem (1123 bytes)
	I1129 09:30:22.833287   40531 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/certs/key.pem (1679 bytes)
	I1129 09:30:22.833328   40531 certs.go:484] found cert: /home/jenkins/minikube-integration/22000-5651/.minikube/files/etc/ssl/certs/96132.pem (1708 bytes)
	I1129 09:30:22.833962   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1129 09:30:22.866077   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1129 09:30:22.895365   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1129 09:30:22.926738   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1129 09:30:22.958648   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1129 09:30:22.990548   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1129 09:30:23.021795   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1129 09:30:23.054141   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1129 09:30:23.086282   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/certs/9613.pem --> /usr/share/ca-certificates/9613.pem (1338 bytes)
	I1129 09:30:23.117060   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/files/etc/ssl/certs/96132.pem --> /usr/share/ca-certificates/96132.pem (1708 bytes)
	I1129 09:30:23.147790   40531 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1129 09:30:23.184294   40531 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1129 09:30:23.210443   40531 ssh_runner.go:195] Run: openssl version
	I1129 09:30:23.217949   40531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9613.pem && ln -fs /usr/share/ca-certificates/9613.pem /etc/ssl/certs/9613.pem"
	I1129 09:30:23.233079   40531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9613.pem
	I1129 09:30:23.238944   40531 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 29 08:36 /usr/share/ca-certificates/9613.pem
	I1129 09:30:23.239025   40531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9613.pem
	I1129 09:30:23.248878   40531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9613.pem /etc/ssl/certs/51391683.0"
	I1129 09:30:23.263279   40531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96132.pem && ln -fs /usr/share/ca-certificates/96132.pem /etc/ssl/certs/96132.pem"
	I1129 09:30:23.277570   40531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96132.pem
	I1129 09:30:23.283272   40531 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 29 08:36 /usr/share/ca-certificates/96132.pem
	I1129 09:30:23.283342   40531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96132.pem
	I1129 09:30:23.291006   40531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96132.pem /etc/ssl/certs/3ec20f2e.0"
	I1129 09:30:23.305541   40531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1129 09:30:23.319864   40531 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:30:23.325566   40531 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 29 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:30:23.325640   40531 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1129 09:30:23.332857   40531 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
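
The openssl/ln sequence above is how the certs become visible to OpenSSL-based clients on the node: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (e.g. b5213941.0). A minimal local sketch of the same steps, assuming a readable cert path; installCert is an illustrative helper, not minikube's actual code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCert mirrors the logged sequence: compute the cert's OpenSSL
// subject hash, then force-link /etc/ssl/certs/<hash>.0 at the PEM so
// OpenSSL-based clients pick it up. Illustrative helper only.
func installCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // behave like `ln -fs`: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
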
	I1129 09:30:23.347332   40531 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1129 09:30:23.352681   40531 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
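
The failed stat is the expected signal here: a missing apiserver-kubelet-client.crt is read as "kubeadm init has never run on this node". Roughly, as a local sketch (firstStart is an illustrative name; the log performs the same check via `stat` over SSH):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"path/filepath"
)

// firstStart reports whether the node looks freshly provisioned: kubeadm
// only creates apiserver-kubelet-client.crt during init, so its absence
// means no cluster has been bootstrapped yet.
func firstStart(certDir string) bool {
	_, err := os.Stat(filepath.Join(certDir, "apiserver-kubelet-client.crt"))
	return errors.Is(err, fs.ErrNotExist)
}

func main() {
	fmt.Println("first start:", firstStart("/var/lib/minikube/certs"))
}
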
	I1129 09:30:23.352755   40531 kubeadm.go:401] StartCluster: {Name:auto-473168 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Clu
sterName:auto-473168 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.142 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 09:30:23.352856   40531 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1129 09:30:23.352923   40531 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1129 09:30:23.396879   40531 cri.go:89] found id: ""
	I1129 09:30:23.396957   40531 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1129 09:30:23.411255   40531 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1129 09:30:23.424662   40531 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1129 09:30:23.440395   40531 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1129 09:30:23.440422   40531 kubeadm.go:158] found existing configuration files:
	
	I1129 09:30:23.440491   40531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1129 09:30:23.452770   40531 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1129 09:30:23.452858   40531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1129 09:30:23.465879   40531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1129 09:30:23.477155   40531 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1129 09:30:23.477240   40531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1129 09:30:23.489617   40531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1129 09:30:23.501654   40531 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1129 09:30:23.501715   40531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1129 09:30:23.514333   40531 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1129 09:30:23.526636   40531 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1129 09:30:23.526697   40531 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
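
The four grep/rm pairs above implement one rule: a kubeadm-owned kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; anything else is removed so kubeadm regenerates it. A local sketch of that loop (cleanStaleConfigs is an illustrative name; minikube runs the grep and rm over SSH instead):

package main

import (
	"os"
	"path/filepath"
	"strings"
)

// cleanStaleConfigs drops any kubeadm-owned kubeconfig that is missing or
// does not reference the expected endpoint, matching the grep/rm sequence
// in the log. Illustrative sketch only.
func cleanStaleConfigs(endpoint string) {
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := filepath.Join("/etc/kubernetes", f)
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			_ = os.Remove(path) // missing or pointing elsewhere: let kubeadm rewrite it
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443")
}
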
	I1129 09:30:23.539001   40531 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
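
For reference, the long invocation above is a PATH-prefixed kubeadm init with the preflight checks minikube knowingly trips (occupied dirs and manifests, the kubelet port, swap, low CPU/memory) suppressed. A sketch of how such a command line could be assembled (initCmd is an illustrative helper, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// initCmd rebuilds the shape of the logged command: the versioned kubeadm
// binary dir is prefixed onto PATH and the expected preflight errors are
// ignored. Illustrative sketch only.
func initCmd(version string, ignore []string) *exec.Cmd {
	sh := fmt.Sprintf(
		`env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s`,
		version, strings.Join(ignore, ","))
	return exec.Command("sudo", "/bin/bash", "-c", sh)
}

func main() {
	cmd := initCmd("v1.34.1", []string{"Port-10250", "Swap", "NumCPU", "Mem"})
	fmt.Println(strings.Join(cmd.Args, " "))
}
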
	I1129 09:30:23.595012   40531 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1129 09:30:23.595086   40531 kubeadm.go:319] [preflight] Running pre-flight checks
	I1129 09:30:23.708384   40531 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1129 09:30:23.708566   40531 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1129 09:30:23.708735   40531 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1129 09:30:23.721738   40531 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1129 09:30:23.874886   40531 out.go:252]   - Generating certificates and keys ...
	I1129 09:30:23.875035   40531 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1129 09:30:23.875149   40531 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1129 09:30:23.875267   40531 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	W1129 09:30:24.145339   40298 pod_ready.go:104] pod "etcd-pause-893760" is not "Ready", error: <nil>
	I1129 09:30:25.775394   40298 pod_ready.go:94] pod "etcd-pause-893760" is "Ready"
	I1129 09:30:25.775427   40298 pod_ready.go:86] duration metric: took 6.006288256s for pod "etcd-pause-893760" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:25.778252   40298 pod_ready.go:83] waiting for pod "kube-apiserver-pause-893760" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:25.782343   40298 pod_ready.go:94] pod "kube-apiserver-pause-893760" is "Ready"
	I1129 09:30:25.782368   40298 pod_ready.go:86] duration metric: took 4.09282ms for pod "kube-apiserver-pause-893760" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:25.785216   40298 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-893760" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:25.789119   40298 pod_ready.go:94] pod "kube-controller-manager-pause-893760" is "Ready"
	I1129 09:30:25.789142   40298 pod_ready.go:86] duration metric: took 3.903593ms for pod "kube-controller-manager-pause-893760" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:25.791277   40298 pod_ready.go:83] waiting for pod "kube-proxy-rzkwr" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:26.169466   40298 pod_ready.go:94] pod "kube-proxy-rzkwr" is "Ready"
	I1129 09:30:26.169490   40298 pod_ready.go:86] duration metric: took 378.196693ms for pod "kube-proxy-rzkwr" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:26.370321   40298 pod_ready.go:83] waiting for pod "kube-scheduler-pause-893760" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:26.769472   40298 pod_ready.go:94] pod "kube-scheduler-pause-893760" is "Ready"
	I1129 09:30:26.769508   40298 pod_ready.go:86] duration metric: took 399.151054ms for pod "kube-scheduler-pause-893760" in "kube-system" namespace to be "Ready" or be gone ...
	I1129 09:30:26.769526   40298 pod_ready.go:40] duration metric: took 8.02140096s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1129 09:30:26.817716   40298 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1129 09:30:26.821978   40298 out.go:179] * Done! kubectl is now configured to use "pause-893760" cluster and "default" namespace by default
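
The pod_ready waits interleaved above come from the parallel pause-893760 run (PID 40298, versus 40531 for auto-473168) and reduce to checking each pod's PodReady condition. A minimal client-go sketch of that check, assuming the default kubeconfig location; it is not minikube's pod_ready.go, just the same condition test:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady mirrors the condition behind the `pod "..." is "Ready"` lines:
// a pod is Ready when its PodReady condition reports True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-893760", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s ready: %v\n", pod.Name, podReady(pod))
}
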
	I1129 09:30:24.317560   40531 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1129 09:30:24.659556   40531 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1129 09:30:24.901895   40531 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1129 09:30:25.729138   40531 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1129 09:30:25.729346   40531 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-473168 localhost] and IPs [192.168.50.142 127.0.0.1 ::1]
	I1129 09:30:25.932743   40531 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1129 09:30:25.932905   40531 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-473168 localhost] and IPs [192.168.50.142 127.0.0.1 ::1]
	I1129 09:30:26.201044   40531 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1129 09:30:26.871908   40531 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1129 09:30:26.997293   40531 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1129 09:30:26.997577   40531 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1129 09:30:27.224975   40531 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1129 09:30:27.513397   40531 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1129 09:30:27.754583   40531 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1129 09:30:27.792065   40531 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1129 09:30:28.234051   40531 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1129 09:30:28.234206   40531 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1129 09:30:28.236789   40531 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1129 09:30:28.239077   40531 out.go:252]   - Booting up control plane ...
	I1129 09:30:28.239217   40531 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1129 09:30:28.239341   40531 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1129 09:30:28.239702   40531 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1129 09:30:28.264034   40531 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1129 09:30:28.264186   40531 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1129 09:30:28.270791   40531 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1129 09:30:28.271043   40531 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1129 09:30:28.271106   40531 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1129 09:30:28.458568   40531 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1129 09:30:28.459253   40531 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
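
The kubelet-check step above polls http://127.0.0.1:10248/healthz until it answers 200 or the 4m0s budget runs out. A minimal sketch of that loop (not kubeadm's actual implementation):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitKubeletHealthy polls the kubelet healthz endpoint until it returns
// 200 OK or the deadline passes, like kubeadm's kubelet-check phase.
func waitKubeletHealthy(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("kubelet not healthy within %s", timeout)
}

func main() {
	if err := waitKubeletHealthy(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
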
	
	
	==> CRI-O <==
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.576478986Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d22e8755-e20d-4c5c-b69e-0cb3b36a82e0 name=/runtime.v1.RuntimeService/Version
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.577506473Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9fac973d-3108-4549-b854-d972599e046c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.577924527Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764408629577901809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9fac973d-3108-4549-b854-d972599e046c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.578597363Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6535a0ce-ce79-4ad3-8501-b4b039348209 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.578667089Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6535a0ce-ce79-4ad3-8501-b4b039348209 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.578927526Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eb4aed02a347d4f806f74d29f691b160f1752223360e1f4993891bc19937acc9,PodSandboxId:0d8ee0e9045738b8a99b81ad7857eef51301dd6ff4e25bd420f922f63899981a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764408617503397540,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzkwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0fdc57-ce2f-483b-82f2-006931b3ab39,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4805659e2d2350aa4b28a3f0a7e9befcdf9d1ce5c46b8a7418eacb37b589daf1,PodSandboxId:fab3926d67f6b2c76c5d114314c72b25a18f547391edfea90b81aa5abd13a417,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764408617512228376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4bmms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64220006-2ede-426c-bd55-8a0c72981851,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99a25a5d8939a18228694eb456392302e9c83463a0275b2753d434deae57f1ee,PodSandboxId:acd5b0858b31dfcc849493fbba20ef4b64d22d30eb16b79478f52c8838f1f98a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764408613528235700,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd77e32b976ddeeaa2821ad1581a49,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebedaca83ba826a1dbb5a46ab2511030acc3b00245a2abecd907793732b610d2,PodSandboxId:fb1043e8abc7adaee6b80b3719fb509dae494263df24a1bfb0f5b112fd52a084,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388
d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764408613501070510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d892aedcec9d261d3ce63d1f2447563a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75538cef284310fb254cadee824a4f44de67872163cbf4f332932a451a0b7db7,PodSandboxId:d43d12644b34ecad64ea2f2e8e8879d632abcb58ab983fb1a867bf05a693a240,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1764408591221150082,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc02c2dd86763f8a7654c214d1aca4ab,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae36582cbc8207da290442472aef7150dd5654da51d2c6bfb156077457c3420e,PodSandboxId:064a34577f14c0558cbe035415c72f0df3d0bd361760c3cc3e7f4548cd8790fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764408591174327569,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd7c40ab743b39365a90b8ce5ed742b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c8f96dff7d0d88bc3f9e905b659365005dcc3c0ab3a617d5aa75138ca581fd,PodSandboxId:0d8ee0e9045738b8a99b81ad7857eef51301dd6ff4e25b
d420f922f63899981a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_CREATED,CreatedAt:1764408591132570485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzkwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0fdc57-ce2f-483b-82f2-006931b3ab39,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77714cab099fbe439b9f36eb17008bc4c718f563945fac16204b748c134957c3,PodSandboxId:acd5b0858b31dfcc849493fbba20ef4b64d22d30eb16b79478f52c8838f1f98a,Metadata:&Cont
ainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1764408591082349561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd77e32b976ddeeaa2821ad1581a49,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49386cd4b239787192e49261e28712a3706738c55e7526c54f9bc6f
46fe925b4,PodSandboxId:fb1043e8abc7adaee6b80b3719fb509dae494263df24a1bfb0f5b112fd52a084,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1764408591039983083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d892aedcec9d261d3ce63d1f2447563a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ef4430e1d2bb8f6efd3aaff4706e7b741d6c4ede2877fa5847dff6b81a716e,PodSandboxId:990af6dc1b865ea31e52bd3b596be9612c1f140ab83c4c2bf9799ccbd542780f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1764408544729560132,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4bmms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64220006-2ede-426c-bd55-8a0c72981851,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"co
ntainerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:178b3ab1cb251b3a9f7c21cd176343ca8ae0a3af11799761ee56e2de3cedd41b,PodSandboxId:2e12473db9fcd13ed241426d8e2e1e024ca83e026fcef11cde19629fc98fed8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1764408531835129253,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893
760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd7c40ab743b39365a90b8ce5ed742b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7555626b5cb53f89c622444b7a65f0d4e5204daa98e629811921ef3bd8259c26,PodSandboxId:607b6ddd8dc665eb03849c32673ba6bfa5f3cf6b26ba656fb823186d5ef39b40,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1764408531785179883,Labels:map[string]string
{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc02c2dd86763f8a7654c214d1aca4ab,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6535a0ce-ce79-4ad3-8501-b4b039348209 name=/runtime.v1.RuntimeService/ListContainers
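
Everything in this CRI-O section is its gRPC CRI surface being exercised (Version, ImageFsInfo, ListContainers, ListPodSandbox). A minimal client sketch against CRI-O's default socket, using the published k8s.io/cri-api v1 types and a recent grpc-go; the socket path is an assumption and error handling is trimmed:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CRI-O serves the CRI over a local unix socket, so no TLS is involved.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// An empty filter reproduces the "returning full container list" responses above.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}
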
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.615744897Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31dff738-941e-4574-bf6d-b4c92f3033e6 name=/runtime.v1.RuntimeService/Version
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.615853804Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31dff738-941e-4574-bf6d-b4c92f3033e6 name=/runtime.v1.RuntimeService/Version
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.617369255Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9636a7ee-56d4-49d7-9611-f283c04046a0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.617774093Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764408629617752121,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9636a7ee-56d4-49d7-9611-f283c04046a0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.619015775Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3e8071d-053e-43b1-9df6-35fbc3499b90 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.619074621Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3e8071d-053e-43b1-9df6-35fbc3499b90 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.619351320Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eb4aed02a347d4f806f74d29f691b160f1752223360e1f4993891bc19937acc9,PodSandboxId:0d8ee0e9045738b8a99b81ad7857eef51301dd6ff4e25bd420f922f63899981a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764408617503397540,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzkwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0fdc57-ce2f-483b-82f2-006931b3ab39,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4805659e2d2350aa4b28a3f0a7e9befcdf9d1ce5c46b8a7418eacb37b589daf1,PodSandboxId:fab3926d67f6b2c76c5d114314c72b25a18f547391edfea90b81aa5abd13a417,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764408617512228376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4bmms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64220006-2ede-426c-bd55-8a0c72981851,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99a25a5d8939a18228694eb456392302e9c83463a0275b2753d434deae57f1ee,PodSandboxId:acd5b0858b31dfcc849493fbba20ef4b64d22d30eb16b79478f52c8838f1f98a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764408613528235700,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd77e32b976ddeeaa2821ad1581a49,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebedaca83ba826a1dbb5a46ab2511030acc3b00245a2abecd907793732b610d2,PodSandboxId:fb1043e8abc7adaee6b80b3719fb509dae494263df24a1bfb0f5b112fd52a084,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388
d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764408613501070510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d892aedcec9d261d3ce63d1f2447563a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75538cef284310fb254cadee824a4f44de67872163cbf4f332932a451a0b7db7,PodSandboxId:d43d12644b34ecad64ea2f2e8e8879d632abcb58ab983fb1a867bf05a693a240,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1764408591221150082,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc02c2dd86763f8a7654c214d1aca4ab,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae36582cbc8207da290442472aef7150dd5654da51d2c6bfb156077457c3420e,PodSandboxId:064a34577f14c0558cbe035415c72f0df3d0bd361760c3cc3e7f4548cd8790fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764408591174327569,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd7c40ab743b39365a90b8ce5ed742b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c8f96dff7d0d88bc3f9e905b659365005dcc3c0ab3a617d5aa75138ca581fd,PodSandboxId:0d8ee0e9045738b8a99b81ad7857eef51301dd6ff4e25b
d420f922f63899981a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_CREATED,CreatedAt:1764408591132570485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzkwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0fdc57-ce2f-483b-82f2-006931b3ab39,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77714cab099fbe439b9f36eb17008bc4c718f563945fac16204b748c134957c3,PodSandboxId:acd5b0858b31dfcc849493fbba20ef4b64d22d30eb16b79478f52c8838f1f98a,Metadata:&Cont
ainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1764408591082349561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd77e32b976ddeeaa2821ad1581a49,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49386cd4b239787192e49261e28712a3706738c55e7526c54f9bc6f
46fe925b4,PodSandboxId:fb1043e8abc7adaee6b80b3719fb509dae494263df24a1bfb0f5b112fd52a084,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1764408591039983083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d892aedcec9d261d3ce63d1f2447563a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessage
Policy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ef4430e1d2bb8f6efd3aaff4706e7b741d6c4ede2877fa5847dff6b81a716e,PodSandboxId:990af6dc1b865ea31e52bd3b596be9612c1f140ab83c4c2bf9799ccbd542780f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1764408544729560132,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4bmms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64220006-2ede-426c-bd55-8a0c72981851,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"co
ntainerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:178b3ab1cb251b3a9f7c21cd176343ca8ae0a3af11799761ee56e2de3cedd41b,PodSandboxId:2e12473db9fcd13ed241426d8e2e1e024ca83e026fcef11cde19629fc98fed8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1764408531835129253,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893
760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd7c40ab743b39365a90b8ce5ed742b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7555626b5cb53f89c622444b7a65f0d4e5204daa98e629811921ef3bd8259c26,PodSandboxId:607b6ddd8dc665eb03849c32673ba6bfa5f3cf6b26ba656fb823186d5ef39b40,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1764408531785179883,Labels:map[string]string
{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc02c2dd86763f8a7654c214d1aca4ab,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3e8071d-053e-43b1-9df6-35fbc3499b90 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.654477800Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=5dee11b0-f9cf-4740-8c47-3c35ed8b5073 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.654748359Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:fab3926d67f6b2c76c5d114314c72b25a18f547391edfea90b81aa5abd13a417,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-4bmms,Uid:64220006-2ede-426c-bd55-8a0c72981851,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1764408590898568122,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-4bmms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64220006-2ede-426c-bd55-8a0c72981851,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-29T09:29:03.709585255Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d43d12644b34ecad64ea2f2e8e8879d632abcb58ab983fb1a867bf05a693a240,Metadata:&PodSandboxMetadata{Name:etcd-pause-893760,Uid:bc02c2dd86763f8a7654c214d1aca4ab,Namespace:kube-system,Attempt:1,
},State:SANDBOX_READY,CreatedAt:1764408590649563183,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc02c2dd86763f8a7654c214d1aca4ab,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.104:2379,kubernetes.io/config.hash: bc02c2dd86763f8a7654c214d1aca4ab,kubernetes.io/config.seen: 2025-11-29T09:28:58.168459257Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0d8ee0e9045738b8a99b81ad7857eef51301dd6ff4e25bd420f922f63899981a,Metadata:&PodSandboxMetadata{Name:kube-proxy-rzkwr,Uid:8d0fdc57-ce2f-483b-82f2-006931b3ab39,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1764408590635829787,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-rzkwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
8d0fdc57-ce2f-483b-82f2-006931b3ab39,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-29T09:29:03.401241389Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fb1043e8abc7adaee6b80b3719fb509dae494263df24a1bfb0f5b112fd52a084,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-893760,Uid:d892aedcec9d261d3ce63d1f2447563a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1764408590616365934,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d892aedcec9d261d3ce63d1f2447563a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d892aedcec9d261d3ce63d1f2447563a,kubernetes.io/config.seen: 2025-11-29T09:28:58.168463977Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:064a34577f14c0558cbe035415c72f0df
3d0bd361760c3cc3e7f4548cd8790fa,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-893760,Uid:2bd7c40ab743b39365a90b8ce5ed742b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1764408590602115173,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd7c40ab743b39365a90b8ce5ed742b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2bd7c40ab743b39365a90b8ce5ed742b,kubernetes.io/config.seen: 2025-11-29T09:28:58.168464739Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:acd5b0858b31dfcc849493fbba20ef4b64d22d30eb16b79478f52c8838f1f98a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-893760,Uid:c2bd77e32b976ddeeaa2821ad1581a49,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1764408590596665504,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.
kubernetes.pod.name: kube-apiserver-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd77e32b976ddeeaa2821ad1581a49,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.104:8443,kubernetes.io/config.hash: c2bd77e32b976ddeeaa2821ad1581a49,kubernetes.io/config.seen: 2025-11-29T09:28:58.168462711Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d4e719b339e025b106760bb57babb3db75593e5b0c574d56a2ffc000130f867a,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-2csdv,Uid:3eafa4ea-e1d3-4729-9d3e-bbe4126f722a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1764408544158732741,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-2csdv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3eafa4ea-e1d3-4729-9d3e-bbe4126f722a,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-29T09:29:03.767155951Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:990af6dc1b865ea31e52bd3b596be9612c1f140ab83c4c2bf9799ccbd542780f,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-4bmms,Uid:64220006-2ede-426c-bd55-8a0c72981851,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1764408544070846503,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-4bmms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64220006-2ede-426c-bd55-8a0c72981851,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-29T09:29:03.709585255Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0a76eecf781827f45ca892334890ecaffa24687e4dc8dc485a5a3d4f5384668e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-893760,Uid:c2bd77e32b976ddeeaa2821ad1581a49,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1764408531589016209,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd77e32b976ddeeaa2821ad1581a49,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.104:8443,kubernetes.io/config.hash: c2bd77e32b976ddeeaa2821ad1581a49,kubernetes.io/config.seen: 2025-11-29T09:28:51.005689421Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2e12473db9fcd13ed241426d8e2e1e024ca83e026fcef11cde19629fc98fed8f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-893760,Uid:2bd7c40ab743b39365a90b8ce5ed742b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1764408531581977060,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd7c40ab743b39365a90b8ce5ed742b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2bd7c40ab743b39365a90b8ce5ed742b,kubernetes.io/config.seen: 2025-11-29T09:28:51.005691426Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:844655aa22d7230d668dcf8a3f479e78fa51d72ee680126341a834b774ca19ca,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-893760,Uid:d892aedcec9d261d3ce63d1f2447563a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1764408531575631980,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d892aedcec9d261d3ce63d1f2447563a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d892aedcec9d261d3ce63d1f2447563a,kubernetes.io/config.seen: 2025-11-29T09:28:51.005690609Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:607b6ddd8dc665eb03849c32673ba6bfa5f3cf6b26ba656fb823186d5ef39b40,Metadata:&PodSandboxMetadata{Name:etcd-pause-893760,Uid:bc02c2dd86763f8a7654c214d1aca4ab,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1764408531562181668,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc02c2dd86763f8a7654c214d1aca4ab,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.104:2379,kubernetes.io/config.hash: bc02c2dd86763f8a7654c214d1aca4ab,kubernetes.io/config.seen: 2025-11-29T09:28:51.005685346Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5dee11b0-f9cf-4740-8c47-3c35ed8b5073 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.655737900Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=366aecd6-0192-41fa-8446-23dc862d7b4b name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.655798021Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=366aecd6-0192-41fa-8446-23dc862d7b4b name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.656086475Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eb4aed02a347d4f806f74d29f691b160f1752223360e1f4993891bc19937acc9,PodSandboxId:0d8ee0e9045738b8a99b81ad7857eef51301dd6ff4e25bd420f922f63899981a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764408617503397540,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzkwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0fdc57-ce2f-483b-82f2-006931b3ab39,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4805659e2d2350aa4b28a3f0a7e9befcdf9d1ce5c46b8a7418eacb37b589daf1,PodSandboxId:fab3926d67f6b2c76c5d114314c72b25a18f547391edfea90b81aa5abd13a417,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764408617512228376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4bmms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64220006-2ede-426c-bd55-8a0c72981851,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99a25a5d8939a18228694eb456392302e9c83463a0275b2753d434deae57f1ee,PodSandboxId:acd5b0858b31dfcc849493fbba20ef4b64d22d30eb16b79478f52c8838f1f98a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764408613528235700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd77e32b976ddeeaa2821ad1581a49,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebedaca83ba826a1dbb5a46ab2511030acc3b00245a2abecd907793732b610d2,PodSandboxId:fb1043e8abc7adaee6b80b3719fb509dae494263df24a1bfb0f5b112fd52a084,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764408613501070510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d892aedcec9d261d3ce63d1f2447563a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75538cef284310fb254cadee824a4f44de67872163cbf4f332932a451a0b7db7,PodSandboxId:d43d12644b34ecad64ea2f2e8e8879d632abcb58ab983fb1a867bf05a693a240,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1764408591221150082,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc02c2dd86763f8a7654c214d1aca4ab,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae36582cbc8207da290442472aef7150dd5654da51d2c6bfb156077457c3420e,PodSandboxId:064a34577f14c0558cbe035415c72f0df3d0bd361760c3cc3e7f4548cd8790fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764408591174327569,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd7c40ab743b39365a90b8ce5ed742b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c8f96dff7d0d88bc3f9e905b659365005dcc3c0ab3a617d5aa75138ca581fd,PodSandboxId:0d8ee0e9045738b8a99b81ad7857eef51301dd6ff4e25bd420f922f63899981a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_CREATED,CreatedAt:1764408591132570485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzkwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0fdc57-ce2f-483b-82f2-006931b3ab39,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77714cab099fbe439b9f36eb17008bc4c718f563945fac16204b748c134957c3,PodSandboxId:acd5b0858b31dfcc849493fbba20ef4b64d22d30eb16b79478f52c8838f1f98a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1764408591082349561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd77e32b976ddeeaa2821ad1581a49,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49386cd4b239787192e49261e28712a3706738c55e7526c54f9bc6f46fe925b4,PodSandboxId:fb1043e8abc7adaee6b80b3719fb509dae494263df24a1bfb0f5b112fd52a084,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1764408591039983083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d892aedcec9d261d3ce63d1f2447563a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ef4430e1d2bb8f6efd3aaff4706e7b741d6c4ede2877fa5847dff6b81a716e,PodSandboxId:990af6dc1b865ea31e52bd3b596be9612c1f140ab83c4c2bf9799ccbd542780f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1764408544729560132,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4bmms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64220006-2ede-426c-bd55-8a0c72981851,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:178b3ab1cb251b3a9f7c21cd176343ca8ae0a3af11799761ee56e2de3cedd41b,PodSandboxId:2e12473db9fcd13ed241426d8e2e1e024ca83e026fcef11cde19629fc98fed8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1764408531835129253,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd7c40ab743b39365a90b8ce5ed742b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7555626b5cb53f89c622444b7a65f0d4e5204daa98e629811921ef3bd8259c26,PodSandboxId:607b6ddd8dc665eb03849c32673ba6bfa5f3cf6b26ba656fb823186d5ef39b40,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1764408531785179883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc02c2dd86763f8a7654c214d1aca4ab,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=366aecd6-0192-41fa-8446-23dc862d7b4b name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.666346650Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ae513142-d314-4cb4-99cf-58c161e78a0d name=/runtime.v1.RuntimeService/Version
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.666440520Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae513142-d314-4cb4-99cf-58c161e78a0d name=/runtime.v1.RuntimeService/Version
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.667694127Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9818af7-2a67-465d-bd08-c3bc856b8802 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.668721609Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1764408629668667228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9818af7-2a67-465d-bd08-c3bc856b8802 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.669958127Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0560622-6b38-46d6-bd89-4b50ab68dbc3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.670207008Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0560622-6b38-46d6-bd89-4b50ab68dbc3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 29 09:30:29 pause-893760 crio[2792]: time="2025-11-29 09:30:29.670953862Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:eb4aed02a347d4f806f74d29f691b160f1752223360e1f4993891bc19937acc9,PodSandboxId:0d8ee0e9045738b8a99b81ad7857eef51301dd6ff4e25bd420f922f63899981a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1764408617503397540,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzkwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0fdc57-ce2f-483b-82f2-006931b3ab39,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4805659e2d2350aa4b28a3f0a7e9befcdf9d1ce5c46b8a7418eacb37b589daf1,PodSandboxId:fab3926d67f6b2c76c5d114314c72b25a18f547391edfea90b81aa5abd13a417,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1764408617512228376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4bmms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64220006-2ede-426c-bd55-8a0c72981851,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99a25a5d8939a18228694eb456392302e9c83463a0275b2753d434deae57f1ee,PodSandboxId:acd5b0858b31dfcc849493fbba20ef4b64d22d30eb16b79478f52c8838f1f98a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1764408613528235700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd77e32b976ddeeaa2821ad1581a49,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ebedaca83ba826a1dbb5a46ab2511030acc3b00245a2abecd907793732b610d2,PodSandboxId:fb1043e8abc7adaee6b80b3719fb509dae494263df24a1bfb0f5b112fd52a084,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1764408613501070510,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d892aedcec9d261d3ce63d1f2447563a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75538cef284310fb254cadee824a4f44de67872163cbf4f332932a451a0b7db7,PodSandboxId:d43d12644b34ecad64ea2f2e8e8879d632abcb58ab983fb1a867bf05a693a240,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1764408591221150082,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc02c2dd86763f8a7654c214d1aca4ab,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae36582cbc8207da290442472aef7150dd5654da51d2c6bfb156077457c3420e,PodSandboxId:064a34577f14c0558cbe035415c72f0df3d0bd361760c3cc3e7f4548cd8790fa,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1764408591174327569,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd7c40ab743b39365a90b8ce5ed742b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d8c8f96dff7d0d88bc3f9e905b659365005dcc3c0ab3a617d5aa75138ca581fd,PodSandboxId:0d8ee0e9045738b8a99b81ad7857eef51301dd6ff4e25bd420f922f63899981a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_CREATED,CreatedAt:1764408591132570485,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rzkwr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d0fdc57-ce2f-483b-82f2-006931b3ab39,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77714cab099fbe439b9f36eb17008bc4c718f563945fac16204b748c134957c3,PodSandboxId:acd5b0858b31dfcc849493fbba20ef4b64d22d30eb16b79478f52c8838f1f98a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1764408591082349561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2bd77e32b976ddeeaa2821ad1581a49,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49386cd4b239787192e49261e28712a3706738c55e7526c54f9bc6f46fe925b4,PodSandboxId:fb1043e8abc7adaee6b80b3719fb509dae494263df24a1bfb0f5b112fd52a084,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1764408591039983083,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d892aedcec9d261d3ce63d1f2447563a,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6ef4430e1d2bb8f6efd3aaff4706e7b741d6c4ede2877fa5847dff6b81a716e,PodSandboxId:990af6dc1b865ea31e52bd3b596be9612c1f140ab83c4c2bf9799ccbd542780f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1764408544729560132,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4bmms,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64220006-2ede-426c-bd55-8a0c72981851,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:178b3ab1cb251b3a9f7c21cd176343ca8ae0a3af11799761ee56e2de3cedd41b,PodSandboxId:2e12473db9fcd13ed241426d8e2e1e024ca83e026fcef11cde19629fc98fed8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1764408531835129253,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bd7c40ab743b39365a90b8ce5ed742b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7555626b5cb53f89c622444b7a65f0d4e5204daa98e629811921ef3bd8259c26,PodSandboxId:607b6ddd8dc665eb03849c32673ba6bfa5f3cf6b26ba656fb823186d5ef39b40,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1764408531785179883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-893760,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc02c2dd86763f8a7654c214d1aca4ab,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c0560622-6b38-46d6-bd89-4b50ab68dbc3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	4805659e2d235       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago       Running             coredns                   1                   fab3926d67f6b       coredns-66bc5c9577-4bmms               kube-system
	eb4aed02a347d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   12 seconds ago       Running             kube-proxy                2                   0d8ee0e904573       kube-proxy-rzkwr                       kube-system
	99a25a5d8939a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   16 seconds ago       Running             kube-apiserver            2                   acd5b0858b31d       kube-apiserver-pause-893760            kube-system
	ebedaca83ba82       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   16 seconds ago       Running             kube-controller-manager   2                   fb1043e8abc7a       kube-controller-manager-pause-893760   kube-system
	75538cef28431       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   38 seconds ago       Running             etcd                      1                   d43d12644b34e       etcd-pause-893760                      kube-system
	ae36582cbc820       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   38 seconds ago       Running             kube-scheduler            1                   064a34577f14c       kube-scheduler-pause-893760            kube-system
	d8c8f96dff7d0       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   38 seconds ago       Created             kube-proxy                1                   0d8ee0e904573       kube-proxy-rzkwr                       kube-system
	77714cab099fb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   38 seconds ago       Exited              kube-apiserver            1                   acd5b0858b31d       kube-apiserver-pause-893760            kube-system
	49386cd4b2397       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   38 seconds ago       Exited              kube-controller-manager   1                   fb1043e8abc7a       kube-controller-manager-pause-893760   kube-system
	c6ef4430e1d2b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   990af6dc1b865       coredns-66bc5c9577-4bmms               kube-system
	178b3ab1cb251       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Exited              kube-scheduler            0                   2e12473db9fcd       kube-scheduler-pause-893760            kube-system
	7555626b5cb53       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Exited              etcd                      0                   607b6ddd8dc66       etcd-pause-893760                      kube-system
	
	
	==> coredns [4805659e2d2350aa4b28a3f0a7e9befcdf9d1ce5c46b8a7418eacb37b589daf1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46055 - 13616 "HINFO IN 7613232645828771212.3063199101481583223. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.026159738s
	
	
	==> coredns [c6ef4430e1d2bb8f6efd3aaff4706e7b741d6c4ede2877fa5847dff6b81a716e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-893760
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-893760
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0eb20ec824c82ab3f24099c8b785e0a2a5789af
	                    minikube.k8s.io/name=pause-893760
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_29T09_28_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 29 Nov 2025 09:28:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-893760
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 29 Nov 2025 09:30:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 29 Nov 2025 09:30:16 +0000   Sat, 29 Nov 2025 09:28:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 29 Nov 2025 09:30:16 +0000   Sat, 29 Nov 2025 09:28:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 29 Nov 2025 09:30:16 +0000   Sat, 29 Nov 2025 09:28:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 29 Nov 2025 09:30:16 +0000   Sat, 29 Nov 2025 09:28:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.104
	  Hostname:    pause-893760
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 3c767e78f6e34960a0105830388bba46
	  System UUID:                3c767e78-f6e3-4960-a010-5830388bba46
	  Boot ID:                    2efdb47d-abc8-4960-9699-39eef6f06aa6
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-4bmms                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     86s
	  kube-system                 etcd-pause-893760                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         93s
	  kube-system                 kube-apiserver-pause-893760             250m (12%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-pause-893760    200m (10%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-rzkwr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-scheduler-pause-893760             100m (5%)     0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 84s                kube-proxy       
	  Normal  Starting                 12s                kube-proxy       
	  Normal  NodeHasSufficientPID     91s                kubelet          Node pause-893760 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  91s                kubelet          Node pause-893760 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s                kubelet          Node pause-893760 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 91s                kubelet          Starting kubelet.
	  Normal  NodeReady                90s                kubelet          Node pause-893760 status is now: NodeReady
	  Normal  RegisteredNode           87s                node-controller  Node pause-893760 event: Registered Node pause-893760 in Controller
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node pause-893760 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node pause-893760 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x7 over 36s)  kubelet          Node pause-893760 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  36s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                node-controller  Node pause-893760 event: Registered Node pause-893760 in Controller
	
	
	==> dmesg <==
	[Nov29 09:28] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001360] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005665] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.193438] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.113787] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.122032] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.112292] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.139725] kauditd_printk_skb: 171 callbacks suppressed
	[Nov29 09:29] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.823808] kauditd_printk_skb: 219 callbacks suppressed
	[ +21.302347] kauditd_printk_skb: 38 callbacks suppressed
	[  +0.174518] kauditd_printk_skb: 304 callbacks suppressed
	[Nov29 09:30] kauditd_printk_skb: 14 callbacks suppressed
	[  +4.007004] kauditd_printk_skb: 22 callbacks suppressed
	
	
	==> etcd [75538cef284310fb254cadee824a4f44de67872163cbf4f332932a451a0b7db7] <==
	{"level":"warn","ts":"2025-11-29T09:30:24.132536Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"397.83603ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6405696258632730950 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-893760\" mod_revision:416 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-893760\" value_size:6749 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-893760\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-29T09:30:24.132594Z","caller":"traceutil/trace.go:172","msg":"trace[585642599] linearizableReadLoop","detail":"{readStateIndex:524; appliedIndex:523; }","duration":"365.15373ms","start":"2025-11-29T09:30:23.767432Z","end":"2025-11-29T09:30:24.132586Z","steps":["trace[585642599] 'read index received'  (duration: 73.203µs)","trace[585642599] 'applied index is now lower than readState.Index'  (duration: 365.079698ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-29T09:30:24.132877Z","caller":"traceutil/trace.go:172","msg":"trace[107119085] transaction","detail":"{read_only:false; response_revision:482; number_of_response:1; }","duration":"740.239157ms","start":"2025-11-29T09:30:23.392624Z","end":"2025-11-29T09:30:24.132863Z","steps":["trace[107119085] 'process raft request'  (duration: 341.476448ms)","trace[107119085] 'compare'  (duration: 397.265626ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:30:24.132976Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-29T09:30:23.392604Z","time spent":"740.335843ms","remote":"127.0.0.1:40620","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6820,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-893760\" mod_revision:416 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-893760\" value_size:6749 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-893760\" > >"}
	{"level":"warn","ts":"2025-11-29T09:30:24.133181Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"365.758937ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-893760\" limit:1 ","response":"range_response_count:1 size:6082"}
	{"level":"info","ts":"2025-11-29T09:30:24.133208Z","caller":"traceutil/trace.go:172","msg":"trace[120927304] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-893760; range_end:; response_count:1; response_revision:482; }","duration":"365.786141ms","start":"2025-11-29T09:30:23.767414Z","end":"2025-11-29T09:30:24.133201Z","steps":["trace[120927304] 'agreement among raft nodes before linearized reading'  (duration: 365.692662ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T09:30:24.133227Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-29T09:30:23.767397Z","time spent":"365.824791ms","remote":"127.0.0.1:40620","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":6104,"request content":"key:\"/registry/pods/kube-system/etcd-pause-893760\" limit:1 "}
	{"level":"info","ts":"2025-11-29T09:30:24.426362Z","caller":"traceutil/trace.go:172","msg":"trace[312180602] linearizableReadLoop","detail":"{readStateIndex:524; appliedIndex:524; }","duration":"158.558907ms","start":"2025-11-29T09:30:24.267785Z","end":"2025-11-29T09:30:24.426344Z","steps":["trace[312180602] 'read index received'  (duration: 158.55269ms)","trace[312180602] 'applied index is now lower than readState.Index'  (duration: 5.359µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:30:24.959195Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"340.382926ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-29T09:30:24.959243Z","caller":"traceutil/trace.go:172","msg":"trace[1327476127] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:482; }","duration":"340.441519ms","start":"2025-11-29T09:30:24.618793Z","end":"2025-11-29T09:30:24.959234Z","steps":["trace[1327476127] 'range keys from in-memory index tree'  (duration: 340.355347ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T09:30:24.959408Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"691.625373ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-893760\" limit:1 ","response":"range_response_count:1 size:6082"}
	{"level":"info","ts":"2025-11-29T09:30:24.959443Z","caller":"traceutil/trace.go:172","msg":"trace[1966405968] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-893760; range_end:; response_count:1; response_revision:482; }","duration":"691.678111ms","start":"2025-11-29T09:30:24.267756Z","end":"2025-11-29T09:30:24.959434Z","steps":["trace[1966405968] 'agreement among raft nodes before linearized reading'  (duration: 158.684693ms)","trace[1966405968] 'range keys from in-memory index tree'  (duration: 532.879615ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:30:24.959464Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-29T09:30:24.267735Z","time spent":"691.723669ms","remote":"127.0.0.1:40620","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":6104,"request content":"key:\"/registry/pods/kube-system/etcd-pause-893760\" limit:1 "}
	{"level":"warn","ts":"2025-11-29T09:30:24.959466Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"533.046668ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6405696258632730956 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-893760\" mod_revision:417 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-893760\" value_size:4969 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-893760\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-29T09:30:24.959500Z","caller":"traceutil/trace.go:172","msg":"trace[1101987858] linearizableReadLoop","detail":"{readStateIndex:525; appliedIndex:524; }","duration":"333.665387ms","start":"2025-11-29T09:30:24.625829Z","end":"2025-11-29T09:30:24.959494Z","steps":["trace[1101987858] 'read index received'  (duration: 25.678µs)","trace[1101987858] 'applied index is now lower than readState.Index'  (duration: 333.639267ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:30:24.959664Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"333.834843ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-29T09:30:24.959681Z","caller":"traceutil/trace.go:172","msg":"trace[1440632774] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:483; }","duration":"333.852272ms","start":"2025-11-29T09:30:24.625824Z","end":"2025-11-29T09:30:24.959677Z","steps":["trace[1440632774] 'agreement among raft nodes before linearized reading'  (duration: 333.807987ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T09:30:24.959694Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-29T09:30:24.625810Z","time spent":"333.881665ms","remote":"127.0.0.1:40248","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2025-11-29T09:30:24.959772Z","caller":"traceutil/trace.go:172","msg":"trace[1550989086] transaction","detail":"{read_only:false; response_revision:483; number_of_response:1; }","duration":"813.391945ms","start":"2025-11-29T09:30:24.146374Z","end":"2025-11-29T09:30:24.959766Z","steps":["trace[1550989086] 'process raft request'  (duration: 280.005054ms)","trace[1550989086] 'compare'  (duration: 532.736135ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:30:24.959811Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-29T09:30:24.146356Z","time spent":"813.42817ms","remote":"127.0.0.1:40620","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5031,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-893760\" mod_revision:417 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-893760\" value_size:4969 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-893760\" > >"}
	{"level":"info","ts":"2025-11-29T09:30:25.409127Z","caller":"traceutil/trace.go:172","msg":"trace[702269983] linearizableReadLoop","detail":"{readStateIndex:525; appliedIndex:525; }","duration":"141.637393ms","start":"2025-11-29T09:30:25.267467Z","end":"2025-11-29T09:30:25.409104Z","steps":["trace[702269983] 'read index received'  (duration: 141.625958ms)","trace[702269983] 'applied index is now lower than readState.Index'  (duration: 5.245µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-29T09:30:25.417730Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.250756ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-893760\" limit:1 ","response":"range_response_count:1 size:6082"}
	{"level":"info","ts":"2025-11-29T09:30:25.417788Z","caller":"traceutil/trace.go:172","msg":"trace[1064063686] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-893760; range_end:; response_count:1; response_revision:483; }","duration":"150.316242ms","start":"2025-11-29T09:30:25.267462Z","end":"2025-11-29T09:30:25.417778Z","steps":["trace[1064063686] 'agreement among raft nodes before linearized reading'  (duration: 141.880451ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-29T09:30:25.417857Z","caller":"traceutil/trace.go:172","msg":"trace[862626125] transaction","detail":"{read_only:false; response_revision:484; number_of_response:1; }","duration":"446.741238ms","start":"2025-11-29T09:30:24.971103Z","end":"2025-11-29T09:30:25.417844Z","steps":["trace[862626125] 'process raft request'  (duration: 438.362822ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-29T09:30:25.417961Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-29T09:30:24.971091Z","time spent":"446.793691ms","remote":"127.0.0.1:40620","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7227,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-893760\" mod_revision:414 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-893760\" value_size:7165 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-893760\" > >"}
	
	
	==> etcd [7555626b5cb53f89c622444b7a65f0d4e5204daa98e629811921ef3bd8259c26] <==
	{"level":"warn","ts":"2025-11-29T09:28:54.496392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:28:54.507372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:28:54.519577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:28:54.535061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:28:54.546669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:28:54.558000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-29T09:28:54.652119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46168","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-29T09:29:41.651238Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-29T09:29:41.652070Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-893760","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.104:2380"],"advertise-client-urls":["https://192.168.83.104:2379"]}
	{"level":"error","ts":"2025-11-29T09:29:41.652374Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-29T09:29:41.736604Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-29T09:29:41.736667Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-29T09:29:41.736688Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"2a0c3b01d1d858e5","current-leader-member-id":"2a0c3b01d1d858e5"}
	{"level":"info","ts":"2025-11-29T09:29:41.736730Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-29T09:29:41.736807Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-29T09:29:41.736798Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-29T09:29:41.736847Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-29T09:29:41.736854Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-29T09:29:41.736895Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.104:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-29T09:29:41.736902Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.104:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-29T09:29:41.736908Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.104:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-29T09:29:41.740909Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.83.104:2380"}
	{"level":"error","ts":"2025-11-29T09:29:41.741021Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.104:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-29T09:29:41.741066Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.83.104:2380"}
	{"level":"info","ts":"2025-11-29T09:29:41.741080Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-893760","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.104:2380"],"advertise-client-urls":["https://192.168.83.104:2379"]}
	
	
	==> kernel <==
	 09:30:30 up 2 min,  0 users,  load average: 1.24, 0.43, 0.16
	Linux pause-893760 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Nov 19 01:10:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [77714cab099fbe439b9f36eb17008bc4c718f563945fac16204b748c134957c3] <==
	W1129 09:29:52.160388       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:52.160545       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1129 09:29:52.164329       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1129 09:29:52.178482       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1129 09:29:52.189570       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1129 09:29:52.191403       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1129 09:29:52.191649       1 instance.go:239] Using reconciler: lease
	W1129 09:29:52.193082       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:52.193247       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:53.161844       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:53.161859       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:53.194552       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:54.564684       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:54.759118       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:54.975985       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:56.883590       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:57.114998       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:29:57.723215       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:30:00.394136       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:30:01.765954       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:30:01.962849       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:30:07.055850       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:30:07.224432       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1129 09:30:08.724474       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1129 09:30:12.192621       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [99a25a5d8939a18228694eb456392302e9c83463a0275b2753d434deae57f1ee] <==
	I1129 09:30:16.152761       1 policy_source.go:240] refreshing policies
	I1129 09:30:16.159205       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1129 09:30:16.159574       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1129 09:30:16.180370       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1129 09:30:16.186875       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1129 09:30:16.192984       1 aggregator.go:171] initial CRD sync complete...
	I1129 09:30:16.193004       1 autoregister_controller.go:144] Starting autoregister controller
	I1129 09:30:16.193010       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1129 09:30:16.193017       1 cache.go:39] Caches are synced for autoregister controller
	I1129 09:30:16.194395       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1129 09:30:16.194445       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1129 09:30:16.194500       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1129 09:30:16.194592       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1129 09:30:16.194620       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1129 09:30:16.215874       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1129 09:30:16.226355       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1129 09:30:17.047435       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1129 09:30:17.227324       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1129 09:30:18.141683       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1129 09:30:18.234720       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1129 09:30:18.287642       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1129 09:30:18.300430       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1129 09:30:19.667778       1 controller.go:667] quota admission added evaluator for: endpoints
	I1129 09:30:19.720480       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1129 09:30:19.865022       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [49386cd4b239787192e49261e28712a3706738c55e7526c54f9bc6f46fe925b4] <==
	I1129 09:29:52.384815       1 serving.go:386] Generated self-signed cert in-memory
	I1129 09:29:52.609904       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1129 09:29:52.609931       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:29:52.611778       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1129 09:29:52.611916       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1129 09:29:52.612642       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1129 09:29:52.613296       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1129 09:30:13.201460       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.83.104:8443/healthz\": dial tcp 192.168.83.104:8443: connect: connection refused"
	
	
	==> kube-controller-manager [ebedaca83ba826a1dbb5a46ab2511030acc3b00245a2abecd907793732b610d2] <==
	I1129 09:30:19.560237       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1129 09:30:19.560557       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:30:19.560673       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1129 09:30:19.560791       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1129 09:30:19.560865       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-893760"
	I1129 09:30:19.560914       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1129 09:30:19.561642       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1129 09:30:19.561801       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1129 09:30:19.562914       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1129 09:30:19.563002       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1129 09:30:19.563043       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1129 09:30:19.563086       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1129 09:30:19.564461       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1129 09:30:19.564773       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1129 09:30:19.566487       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1129 09:30:19.572670       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1129 09:30:19.577994       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1129 09:30:19.579986       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1129 09:30:19.589584       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1129 09:30:19.589626       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1129 09:30:19.589637       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1129 09:30:19.596977       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1129 09:30:19.596982       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1129 09:30:19.601315       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1129 09:30:19.869608       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [d8c8f96dff7d0d88bc3f9e905b659365005dcc3c0ab3a617d5aa75138ca581fd] <==
	
	
	==> kube-proxy [eb4aed02a347d4f806f74d29f691b160f1752223360e1f4993891bc19937acc9] <==
	I1129 09:30:17.764211       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1129 09:30:17.864664       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1129 09:30:17.865369       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.104"]
	E1129 09:30:17.865493       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1129 09:30:17.915252       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1129 09:30:17.915376       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1129 09:30:17.915404       1 server_linux.go:132] "Using iptables Proxier"
	I1129 09:30:17.936673       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1129 09:30:17.937397       1 server.go:527] "Version info" version="v1.34.1"
	I1129 09:30:17.937434       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:30:17.946904       1 config.go:200] "Starting service config controller"
	I1129 09:30:17.946965       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1129 09:30:17.947000       1 config.go:106] "Starting endpoint slice config controller"
	I1129 09:30:17.947007       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1129 09:30:17.947023       1 config.go:403] "Starting serviceCIDR config controller"
	I1129 09:30:17.947029       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1129 09:30:17.949252       1 config.go:309] "Starting node config controller"
	I1129 09:30:17.950424       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1129 09:30:17.950494       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1129 09:30:18.047165       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1129 09:30:18.047228       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1129 09:30:18.047334       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [178b3ab1cb251b3a9f7c21cd176343ca8ae0a3af11799761ee56e2de3cedd41b] <==
	I1129 09:28:55.991245       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1129 09:28:56.000587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1129 09:28:56.001026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1129 09:28:56.001083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1129 09:28:56.001207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1129 09:28:56.001422       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1129 09:28:56.001435       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1129 09:28:56.001529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1129 09:28:56.001630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1129 09:28:56.001662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1129 09:28:56.001856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1129 09:28:56.001951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1129 09:28:56.002088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1129 09:28:56.001959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1129 09:28:56.002430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1129 09:28:56.002460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1129 09:28:56.002578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1129 09:28:56.002224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1129 09:28:56.002638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1129 09:28:56.002663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1129 09:28:57.591662       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:29:41.655626       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1129 09:29:41.658484       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1129 09:29:41.663585       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1129 09:29:41.663626       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ae36582cbc8207da290442472aef7150dd5654da51d2c6bfb156077457c3420e] <==
	I1129 09:30:14.876137       1 serving.go:386] Generated self-signed cert in-memory
	W1129 09:30:16.114483       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1129 09:30:16.114521       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1129 09:30:16.114530       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1129 09:30:16.114536       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1129 09:30:16.191418       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1129 09:30:16.191466       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1129 09:30:16.207323       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:30:16.207371       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1129 09:30:16.207907       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1129 09:30:16.208022       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1129 09:30:16.308357       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 29 09:30:15 pause-893760 kubelet[3612]: E1129 09:30:15.522516    3612 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-893760\" not found" node="pause-893760"
	Nov 29 09:30:15 pause-893760 kubelet[3612]: E1129 09:30:15.523338    3612 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-893760\" not found" node="pause-893760"
	Nov 29 09:30:15 pause-893760 kubelet[3612]: E1129 09:30:15.523762    3612 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-893760\" not found" node="pause-893760"
	Nov 29 09:30:15 pause-893760 kubelet[3612]: E1129 09:30:15.524201    3612 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-893760\" not found" node="pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: I1129 09:30:16.196394    3612 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: I1129 09:30:16.236072    3612 kubelet_node_status.go:124] "Node was previously registered" node="pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: I1129 09:30:16.236339    3612 kubelet_node_status.go:78] "Successfully registered node" node="pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: I1129 09:30:16.236406    3612 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: I1129 09:30:16.241248    3612 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: E1129 09:30:16.252900    3612 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-893760\" already exists" pod="kube-system/etcd-pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: I1129 09:30:16.252969    3612 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: E1129 09:30:16.267082    3612 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-893760\" already exists" pod="kube-system/kube-apiserver-pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: I1129 09:30:16.267126    3612 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: E1129 09:30:16.283098    3612 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-893760\" already exists" pod="kube-system/kube-controller-manager-pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: I1129 09:30:16.283145    3612 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: E1129 09:30:16.293501    3612 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-893760\" already exists" pod="kube-system/kube-scheduler-pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: I1129 09:30:16.524004    3612 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-893760"
	Nov 29 09:30:16 pause-893760 kubelet[3612]: E1129 09:30:16.539688    3612 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-893760\" already exists" pod="kube-system/kube-apiserver-pause-893760"
	Nov 29 09:30:17 pause-893760 kubelet[3612]: I1129 09:30:17.179134    3612 apiserver.go:52] "Watching apiserver"
	Nov 29 09:30:17 pause-893760 kubelet[3612]: I1129 09:30:17.195458    3612 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 29 09:30:17 pause-893760 kubelet[3612]: I1129 09:30:17.221029    3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d0fdc57-ce2f-483b-82f2-006931b3ab39-xtables-lock\") pod \"kube-proxy-rzkwr\" (UID: \"8d0fdc57-ce2f-483b-82f2-006931b3ab39\") " pod="kube-system/kube-proxy-rzkwr"
	Nov 29 09:30:17 pause-893760 kubelet[3612]: I1129 09:30:17.221092    3612 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d0fdc57-ce2f-483b-82f2-006931b3ab39-lib-modules\") pod \"kube-proxy-rzkwr\" (UID: \"8d0fdc57-ce2f-483b-82f2-006931b3ab39\") " pod="kube-system/kube-proxy-rzkwr"
	Nov 29 09:30:17 pause-893760 kubelet[3612]: I1129 09:30:17.484635    3612 scope.go:117] "RemoveContainer" containerID="d8c8f96dff7d0d88bc3f9e905b659365005dcc3c0ab3a617d5aa75138ca581fd"
	Nov 29 09:30:23 pause-893760 kubelet[3612]: E1129 09:30:23.382787    3612 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1764408623382011531  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 29 09:30:23 pause-893760 kubelet[3612]: E1129 09:30:23.382930    3612 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1764408623382011531  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-893760 -n pause-893760
helpers_test.go:269: (dbg) Run:  kubectl --context pause-893760 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (53.07s)
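
Reading the post-mortem above: etcd received SIGTERM at 09:29:41; the first restarted kube-apiserver (77714cab…) then spent roughly 20s failing to dial 127.0.0.1:2379 (connection refused) before exiting fatally at 09:30:12 with "Error creating leases: error creating storage factory: context deadline exceeded", which also tripped the controller-manager's apiserver healthz wait at 09:30:13; the replacement apiserver (99a25a5d…) synced its caches at 09:30:16. The etcd traces after the restart (apply/txn durations of 150ms up to ~813ms against a 100ms expectation) suggest the VM was IO- or CPU-starved during the measured window. A minimal sketch for spot-checking the same components by hand, assuming the pause-893760 profile from this run is still up; the crictl and curl invocations are illustrative, not part of the harness (both binaries are exercised by the TestISOImage/Binaries checks in the pass list below):

	kubectl --context pause-893760 get pods -n kube-system
	out/minikube-linux-amd64 -p pause-893760 ssh -- sudo crictl ps -a --name kube-apiserver
	out/minikube-linux-amd64 -p pause-893760 ssh -- sudo curl -sk https://localhost:8443/healthz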


Test pass (300/345)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 22.74
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.17
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 12.1
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.44
18 TestDownloadOnly/v1.34.1/DeleteAll 0.44
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.64
22 TestOffline 106.24
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 128.76
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 10.53
35 TestAddons/parallel/Registry 16.29
36 TestAddons/parallel/RegistryCreds 0.72
38 TestAddons/parallel/InspektorGadget 11.73
39 TestAddons/parallel/MetricsServer 6.06
41 TestAddons/parallel/CSI 58.89
42 TestAddons/parallel/Headlamp 20.5
43 TestAddons/parallel/CloudSpanner 6.6
44 TestAddons/parallel/LocalPath 57.42
45 TestAddons/parallel/NvidiaDevicePlugin 6.93
46 TestAddons/parallel/Yakd 11.87
48 TestAddons/StoppedEnableDisable 81
49 TestCertOptions 63.91
50 TestCertExpiration 284.77
52 TestForceSystemdFlag 74.22
53 TestForceSystemdEnv 53.99
58 TestErrorSpam/setup 35.37
59 TestErrorSpam/start 0.34
60 TestErrorSpam/status 0.67
61 TestErrorSpam/pause 1.52
62 TestErrorSpam/unpause 1.69
63 TestErrorSpam/stop 5.23
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 95.63
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 59.9
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.46
75 TestFunctional/serial/CacheCmd/cache/add_local 2.13
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 36.26
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.35
86 TestFunctional/serial/LogsFileCmd 1.37
87 TestFunctional/serial/InvalidService 4.62
89 TestFunctional/parallel/ConfigCmd 0.44
90 TestFunctional/parallel/DashboardCmd 17.07
91 TestFunctional/parallel/DryRun 0.22
92 TestFunctional/parallel/InternationalLanguage 0.12
93 TestFunctional/parallel/StatusCmd 0.75
97 TestFunctional/parallel/ServiceCmdConnect 18.55
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 45.66
101 TestFunctional/parallel/SSHCmd 0.39
102 TestFunctional/parallel/CpCmd 1.15
103 TestFunctional/parallel/MySQL 24.87
104 TestFunctional/parallel/FileSync 0.17
105 TestFunctional/parallel/CertSync 1
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
113 TestFunctional/parallel/License 0.37
114 TestFunctional/parallel/ServiceCmd/DeployApp 10.22
115 TestFunctional/parallel/Version/short 0.07
116 TestFunctional/parallel/Version/components 0.72
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.32
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
122 TestFunctional/parallel/ImageCommands/Setup 1.75
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.29
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.85
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.09
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.83
131 TestFunctional/parallel/ServiceCmd/List 0.3
132 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 5.49
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
135 TestFunctional/parallel/ServiceCmd/Format 0.31
136 TestFunctional/parallel/ServiceCmd/URL 0.31
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.6
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
148 TestFunctional/parallel/ProfileCmd/profile_list 0.31
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
150 TestFunctional/parallel/MountCmd/any-port 12.18
151 TestFunctional/parallel/MountCmd/specific-port 1.53
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.04
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 257.03
161 TestMultiControlPlane/serial/DeployApp 7.4
162 TestMultiControlPlane/serial/PingHostFromPods 1.32
163 TestMultiControlPlane/serial/AddWorkerNode 47.14
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.68
166 TestMultiControlPlane/serial/CopyFile 10.73
167 TestMultiControlPlane/serial/StopSecondaryNode 80.09
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.52
169 TestMultiControlPlane/serial/RestartSecondaryNode 34.09
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.86
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 367.8
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.29
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
174 TestMultiControlPlane/serial/StopCluster 250.16
175 TestMultiControlPlane/serial/RestartCluster 92.79
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.52
177 TestMultiControlPlane/serial/AddSecondaryNode 105.95
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.68
183 TestJSONOutput/start/Command 78.6
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.71
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.65
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 6.94
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.22
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 77.76
215 TestMountStart/serial/StartWithMountFirst 22.32
216 TestMountStart/serial/VerifyMountFirst 0.3
217 TestMountStart/serial/StartWithMountSecond 19.99
218 TestMountStart/serial/VerifyMountSecond 0.31
219 TestMountStart/serial/DeleteFirst 0.73
220 TestMountStart/serial/VerifyMountPostDelete 0.32
221 TestMountStart/serial/Stop 1.28
222 TestMountStart/serial/RestartStopped 19.65
223 TestMountStart/serial/VerifyMountPostStop 0.31
226 TestMultiNode/serial/FreshStart2Nodes 125.12
227 TestMultiNode/serial/DeployApp2Nodes 6.24
228 TestMultiNode/serial/PingHostFrom2Pods 0.85
229 TestMultiNode/serial/AddNode 42.66
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.46
232 TestMultiNode/serial/CopyFile 6.04
233 TestMultiNode/serial/StopNode 2.22
234 TestMultiNode/serial/StartAfterStop 40.44
235 TestMultiNode/serial/RestartKeepsNodes 292.23
236 TestMultiNode/serial/DeleteNode 2.66
237 TestMultiNode/serial/StopMultiNode 179.58
238 TestMultiNode/serial/RestartMultiNode 85.65
239 TestMultiNode/serial/ValidateNameConflict 38.42
246 TestScheduledStopUnix 107.84
250 TestRunningBinaryUpgrade 392.88
252 TestKubernetesUpgrade 153.58
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/StartWithK8s 78.45
257 TestNoKubernetes/serial/StartWithStopK8s 24.14
258 TestNoKubernetes/serial/Start 28.19
266 TestNetworkPlugins/group/false 4.27
270 TestISOImage/Setup 36.21
271 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.15
273 TestNoKubernetes/serial/ProfileList 19.75
274 TestNoKubernetes/serial/Stop 1.81
275 TestNoKubernetes/serial/StartNoArgs 18.17
277 TestISOImage/Binaries/crictl 0.21
278 TestISOImage/Binaries/curl 0.17
279 TestISOImage/Binaries/docker 0.27
280 TestISOImage/Binaries/git 0.2
281 TestISOImage/Binaries/iptables 0.18
282 TestISOImage/Binaries/podman 0.2
283 TestISOImage/Binaries/rsync 0.19
284 TestISOImage/Binaries/socat 0.18
285 TestISOImage/Binaries/wget 0.18
286 TestISOImage/Binaries/VBoxControl 0.2
287 TestISOImage/Binaries/VBoxService 0.18
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
297 TestPause/serial/Start 100.44
298 TestStoppedBinaryUpgrade/Setup 3.19
299 TestStoppedBinaryUpgrade/Upgrade 73.92
301 TestStoppedBinaryUpgrade/MinikubeLogs 1.26
302 TestNetworkPlugins/group/auto/Start 79.69
303 TestNetworkPlugins/group/enable-default-cni/Start 84.29
304 TestNetworkPlugins/group/flannel/Start 83.71
305 TestNetworkPlugins/group/auto/KubeletFlags 0.21
306 TestNetworkPlugins/group/auto/NetCatPod 12.29
307 TestNetworkPlugins/group/auto/DNS 0.19
308 TestNetworkPlugins/group/auto/Localhost 0.16
309 TestNetworkPlugins/group/auto/HairPin 0.17
310 TestNetworkPlugins/group/bridge/Start 83.15
311 TestNetworkPlugins/group/calico/Start 100.1
312 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.2
313 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.25
314 TestNetworkPlugins/group/flannel/ControllerPod 6.01
315 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
316 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
317 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
318 TestNetworkPlugins/group/flannel/KubeletFlags 0.46
319 TestNetworkPlugins/group/flannel/NetCatPod 12.31
320 TestNetworkPlugins/group/custom-flannel/Start 77.29
321 TestNetworkPlugins/group/flannel/DNS 0.17
322 TestNetworkPlugins/group/flannel/Localhost 0.13
323 TestNetworkPlugins/group/flannel/HairPin 0.14
324 TestNetworkPlugins/group/kindnet/Start 70.01
325 TestNetworkPlugins/group/bridge/KubeletFlags 0.49
326 TestNetworkPlugins/group/bridge/NetCatPod 12.33
327 TestNetworkPlugins/group/bridge/DNS 0.19
328 TestNetworkPlugins/group/bridge/Localhost 0.15
329 TestNetworkPlugins/group/bridge/HairPin 0.17
330 TestNetworkPlugins/group/calico/ControllerPod 6.01
331 TestNetworkPlugins/group/calico/KubeletFlags 0.19
332 TestNetworkPlugins/group/calico/NetCatPod 11.26
334 TestStartStop/group/old-k8s-version/serial/FirstStart 98.52
335 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.2
336 TestNetworkPlugins/group/custom-flannel/NetCatPod 17.31
337 TestNetworkPlugins/group/calico/DNS 0.16
338 TestNetworkPlugins/group/calico/Localhost 0.13
339 TestNetworkPlugins/group/calico/HairPin 0.16
340 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
341 TestNetworkPlugins/group/custom-flannel/DNS 0.16
342 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
343 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
344 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
345 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
347 TestStartStop/group/no-preload/serial/FirstStart 110.29
348 TestNetworkPlugins/group/kindnet/DNS 0.51
349 TestNetworkPlugins/group/kindnet/Localhost 0.18
350 TestNetworkPlugins/group/kindnet/HairPin 0.21
352 TestStartStop/group/embed-certs/serial/FirstStart 99.03
354 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 102.61
355 TestStartStop/group/old-k8s-version/serial/DeployApp 11.34
356 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.2
357 TestStartStop/group/old-k8s-version/serial/Stop 89.22
358 TestStartStop/group/no-preload/serial/DeployApp 11.28
359 TestStartStop/group/embed-certs/serial/DeployApp 11.28
360 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.04
361 TestStartStop/group/no-preload/serial/Stop 72.5
362 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.91
363 TestStartStop/group/embed-certs/serial/Stop 88.19
364 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.28
365 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.97
366 TestStartStop/group/default-k8s-diff-port/serial/Stop 84.51
367 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
368 TestStartStop/group/old-k8s-version/serial/SecondStart 44.81
369 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
370 TestStartStop/group/no-preload/serial/SecondStart 60.35
371 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
372 TestStartStop/group/embed-certs/serial/SecondStart 45.48
373 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 11.01
374 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
375 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 56.62
376 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
377 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.38
378 TestStartStop/group/old-k8s-version/serial/Pause 3.97
380 TestStartStop/group/newest-cni/serial/FirstStart 56.96
381 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 20.01
382 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 17.01
383 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
384 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
385 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 15.01
386 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.45
387 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.41
388 TestStartStop/group/no-preload/serial/Pause 3.48
389 TestStartStop/group/embed-certs/serial/Pause 3.19
391 TestISOImage/PersistentMounts//data 0.31
392 TestISOImage/PersistentMounts//var/lib/docker 0.19
393 TestISOImage/PersistentMounts//var/lib/cni 0.19
394 TestISOImage/PersistentMounts//var/lib/kubelet 0.19
395 TestISOImage/PersistentMounts//var/lib/minikube 0.18
396 TestISOImage/PersistentMounts//var/lib/toolbox 0.19
397 TestISOImage/PersistentMounts//var/lib/boot2docker 0.18
398 TestISOImage/VersionJSON 0.17
399 TestISOImage/eBPFSupport 0.19
400 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
401 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
402 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.8
403 TestStartStop/group/newest-cni/serial/DeployApp 0
404 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.09
405 TestStartStop/group/newest-cni/serial/Stop 10.56
406 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.15
407 TestStartStop/group/newest-cni/serial/SecondStart 31.96
408 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
409 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
410 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.19
411 TestStartStop/group/newest-cni/serial/Pause 2.24
TestDownloadOnly/v1.28.0/json-events (22.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-109351 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-109351 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (22.740781576s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (22.74s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1129 08:28:50.392296    9613 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1129 08:28:50.392367    9613 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5651/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
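
Note: preload-exists passes here because the tarball cached by the earlier json-events run is still on disk, so no download is needed. Below is a minimal Go sketch of that kind of lookup, assuming the cache layout shown in the log; the helper is illustrative, not minikube's actual preload.go.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadPath builds the expected location of a cached preload tarball,
    // mirroring the directory layout visible in the log above.
    func preloadPath(minikubeHome, k8sVersion string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
        return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    }

    func main() {
        p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.28.0")
        if _, err := os.Stat(p); err == nil {
            fmt.Println("Found local preload:", p)
        } else {
            fmt.Println("No local preload, would fall back to download:", err)
        }
    }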

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-109351
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-109351: exit status 85 (73.14813ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-109351 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-109351 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 08:28:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 08:28:27.707205    9625 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:28:27.707424    9625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:28:27.707434    9625 out.go:374] Setting ErrFile to fd 2...
	I1129 08:28:27.707438    9625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:28:27.707676    9625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
	W1129 08:28:27.707881    9625 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22000-5651/.minikube/config/config.json: open /home/jenkins/minikube-integration/22000-5651/.minikube/config/config.json: no such file or directory
	I1129 08:28:27.708400    9625 out.go:368] Setting JSON to true
	I1129 08:28:27.709354    9625 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":652,"bootTime":1764404256,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 08:28:27.709412    9625 start.go:143] virtualization: kvm guest
	I1129 08:28:27.713929    9625 out.go:99] [download-only-109351] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1129 08:28:27.714095    9625 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22000-5651/.minikube/cache/preloaded-tarball: no such file or directory
	I1129 08:28:27.714124    9625 notify.go:221] Checking for updates...
	I1129 08:28:27.715314    9625 out.go:171] MINIKUBE_LOCATION=22000
	I1129 08:28:27.716629    9625 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 08:28:27.717732    9625 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22000-5651/kubeconfig
	I1129 08:28:27.718967    9625 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5651/.minikube
	I1129 08:28:27.719837    9625 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1129 08:28:27.721676    9625 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1129 08:28:27.722069    9625 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 08:28:28.224649    9625 out.go:99] Using the kvm2 driver based on user configuration
	I1129 08:28:28.224694    9625 start.go:309] selected driver: kvm2
	I1129 08:28:28.224700    9625 start.go:927] validating driver "kvm2" against <nil>
	I1129 08:28:28.225071    9625 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 08:28:28.225576    9625 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1129 08:28:28.225754    9625 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1129 08:28:28.225782    9625 cni.go:84] Creating CNI manager for ""
	I1129 08:28:28.225851    9625 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1129 08:28:28.225864    9625 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1129 08:28:28.225921    9625 start.go:353] cluster config:
	{Name:download-only-109351 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-109351 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:28:28.226150    9625 iso.go:125] acquiring lock: {Name:mk0184b92a126aea44cd27d4836c247b817b0333 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 08:28:28.227694    9625 out.go:99] Downloading VM boot image ...
	I1129 08:28:28.227726    9625 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22000-5651/.minikube/cache/iso/amd64/minikube-v1.37.0-1763503576-21924-amd64.iso
	I1129 08:28:38.010756    9625 out.go:99] Starting "download-only-109351" primary control-plane node in "download-only-109351" cluster
	I1129 08:28:38.010786    9625 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1129 08:28:38.101347    9625 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1129 08:28:38.101382    9625 cache.go:65] Caching tarball of preloaded images
	I1129 08:28:38.101541    9625 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1129 08:28:38.103528    9625 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1129 08:28:38.103554    9625 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1129 08:28:38.204777    9625 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1129 08:28:38.204939    9625 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22000-5651/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-109351 host does not exist
	  To start a cluster, run: "minikube start -p download-only-109351"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
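
Note: the "checksum=md5:..." query parameter in the download URLs above is the verification convention of hashicorp/go-getter, which minikube's download.go builds on; go-getter checks the fetched file against the checksum and fails the fetch on a mismatch. A minimal sketch, assuming go-getter v1:

    package main

    import (
        "fmt"
        "log"

        getter "github.com/hashicorp/go-getter"
    )

    func main() {
        // go-getter parses the checksum query parameter and verifies the
        // downloaded file against it after the transfer completes.
        src := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/" +
            "preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4" +
            "?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b"
        if err := getter.GetFile("/tmp/preload.tar.lz4", src); err != nil {
            log.Fatal(err)
        }
        fmt.Println("download verified")
    }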

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-109351
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (12.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-915524 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-915524 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.096353833s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (12.10s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1129 08:29:02.873683    9613 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1129 08:29:02.873763    9613 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22000-5651/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-915524
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-915524: exit status 85 (441.282291ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-109351 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-109351 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ 29 Nov 25 08:28 UTC │
	│ delete  │ -p download-only-109351                                                                                                                                                 │ download-only-109351 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │ 29 Nov 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-915524 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-915524 │ jenkins │ v1.37.0 │ 29 Nov 25 08:28 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/29 08:28:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1129 08:28:50.830993    9865 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:28:50.831202    9865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:28:50.831212    9865 out.go:374] Setting ErrFile to fd 2...
	I1129 08:28:50.831215    9865 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:28:50.831408    9865 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
	I1129 08:28:50.831861    9865 out.go:368] Setting JSON to true
	I1129 08:28:50.832677    9865 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":675,"bootTime":1764404256,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 08:28:50.832732    9865 start.go:143] virtualization: kvm guest
	I1129 08:28:50.834847    9865 out.go:99] [download-only-915524] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 08:28:50.835018    9865 notify.go:221] Checking for updates...
	I1129 08:28:50.836615    9865 out.go:171] MINIKUBE_LOCATION=22000
	I1129 08:28:50.838433    9865 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 08:28:50.839836    9865 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22000-5651/kubeconfig
	I1129 08:28:50.841512    9865 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5651/.minikube
	I1129 08:28:50.843023    9865 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1129 08:28:50.846028    9865 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1129 08:28:50.846324    9865 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 08:28:50.881961    9865 out.go:99] Using the kvm2 driver based on user configuration
	I1129 08:28:50.881998    9865 start.go:309] selected driver: kvm2
	I1129 08:28:50.882005    9865 start.go:927] validating driver "kvm2" against <nil>
	I1129 08:28:50.882308    9865 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1129 08:28:50.882813    9865 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1129 08:28:50.882980    9865 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1129 08:28:50.883007    9865 cni.go:84] Creating CNI manager for ""
	I1129 08:28:50.883052    9865 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1129 08:28:50.883066    9865 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1129 08:28:50.883110    9865 start.go:353] cluster config:
	{Name:download-only-915524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-915524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:28:50.883209    9865 iso.go:125] acquiring lock: {Name:mk0184b92a126aea44cd27d4836c247b817b0333 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1129 08:28:50.884870    9865 out.go:99] Starting "download-only-915524" primary control-plane node in "download-only-915524" cluster
	I1129 08:28:50.884900    9865 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 08:28:50.977201    9865 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1129 08:28:50.977236    9865 cache.go:65] Caching tarball of preloaded images
	I1129 08:28:50.977429    9865 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1129 08:28:50.979417    9865 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1129 08:28:50.979437    9865 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1129 08:28:51.078276    9865 preload.go:295] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1129 08:28:51.078325    9865 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/22000-5651/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-915524 host does not exist
	  To start a cluster, run: "minikube start -p download-only-915524"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.44s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.44s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-915524
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.64s)

                                                
                                                
=== RUN   TestBinaryMirror
I1129 08:29:04.190864    9613 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-065244 --alsologtostderr --binary-mirror http://127.0.0.1:34259 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-065244" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-065244
--- PASS: TestBinaryMirror (0.64s)
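
Note: with --binary-mirror, minikube fetches kubectl from the given host instead of dl.k8s.io, as the binary.go line above shows. A hypothetical sketch of the URL rewrite follows; the /release/<version>/bin/<os>/<arch>/<binary> layout is an assumption modeled on dl.k8s.io, not taken from minikube's source.

    package main

    import (
        "fmt"
        "log"
        "net/url"
    )

    // mirrorURL rewrites a kubectl download to a --binary-mirror host.
    // Illustrative only: the path layout below is assumed.
    func mirrorURL(mirror, version, goos, arch, binary string) (string, error) {
        base, err := url.Parse(mirror)
        if err != nil {
            return "", err
        }
        return base.JoinPath("release", version, "bin", goos, arch, binary).String(), nil
    }

    func main() {
        u, err := mirrorURL("http://127.0.0.1:34259", "v1.34.1", "linux", "amd64", "kubectl")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(u) // http://127.0.0.1:34259/release/v1.34.1/bin/linux/amd64/kubectl
    }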

                                                
                                    
TestOffline (106.24s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-269571 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-269571 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m45.343260491s)
helpers_test.go:175: Cleaning up "offline-crio-269571" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-269571
--- PASS: TestOffline (106.24s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-213983
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-213983: exit status 85 (61.113405ms)

                                                
                                                
-- stdout --
	* Profile "addons-213983" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-213983"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-213983
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-213983: exit status 85 (62.042967ms)

                                                
                                                
-- stdout --
	* Profile "addons-213983" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-213983"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (128.76s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-213983 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-213983 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m8.76446067s)
--- PASS: TestAddons/Setup (128.76s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-213983 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-213983 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.53s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-213983 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-213983 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a81c966d-80ea-4cb4-af63-4079ae7f315c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a81c966d-80ea-4cb4-af63-4079ae7f315c] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003622221s
addons_test.go:694: (dbg) Run:  kubectl --context addons-213983 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-213983 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-213983 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.53s)

                                                
                                    
TestAddons/parallel/Registry (16.29s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.444534ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-pw672" [464566ae-151b-4294-8a2a-b34e5c6562ec] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006120564s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-7cbkh" [73ffdfdf-bd95-4081-8154-0ffcb209c237] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004236543s
addons_test.go:392: (dbg) Run:  kubectl --context addons-213983 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-213983 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-213983 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.546741901s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-213983 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-213983 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.29s)
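
Note: the registry check above boils down to an HTTP reachability probe against the Service's cluster DNS name (`wget --spider` from a busybox pod). A rough Go equivalent is below; the hostname only resolves from inside the cluster, so this would have to run in a pod.

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // HEAD is the closest match to wget --spider: fetch headers only.
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
        if err != nil {
            fmt.Println("registry unreachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("registry responded:", resp.Status)
    }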

                                                
                                    
TestAddons/parallel/RegistryCreds (0.72s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.76615ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-213983
addons_test.go:332: (dbg) Run:  kubectl --context addons-213983 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-213983 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.72s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.73s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-44frw" [82785030-bded-432f-be02-75a1101403d9] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004470404s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-213983 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-213983 addons disable inspektor-gadget --alsologtostderr -v=1: (5.720421801s)
--- PASS: TestAddons/parallel/InspektorGadget (11.73s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.06s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 8.740665ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-frgcn" [f71b082d-7406-491a-9d31-dd48f8c0106e] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004806935s
addons_test.go:463: (dbg) Run:  kubectl --context addons-213983 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-213983 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.06s)

                                                
                                    
TestAddons/parallel/CSI (58.89s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1129 08:31:46.490949    9613 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1129 08:31:46.496181    9613 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1129 08:31:46.496212    9613 kapi.go:107] duration metric: took 5.282942ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 5.296274ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-213983 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc -o jsonpath={.status.phase} -n default
2025/11/29 08:31:48 [DEBUG] GET http://192.168.39.35:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-213983 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [a45e4f77-0a56-4e78-97d8-0fa7aaa70e55] Pending
helpers_test.go:352: "task-pv-pod" [a45e4f77-0a56-4e78-97d8-0fa7aaa70e55] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [a45e4f77-0a56-4e78-97d8-0fa7aaa70e55] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004078315s
addons_test.go:572: (dbg) Run:  kubectl --context addons-213983 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-213983 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-213983 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-213983 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-213983 delete pod task-pv-pod: (1.083092313s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-213983 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-213983 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-213983 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [25abfc80-fb61-4e3f-aaf5-27b418b0e093] Pending
helpers_test.go:352: "task-pv-pod-restore" [25abfc80-fb61-4e3f-aaf5-27b418b0e093] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [25abfc80-fb61-4e3f-aaf5-27b418b0e093] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004607252s
addons_test.go:614: (dbg) Run:  kubectl --context addons-213983 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-213983 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-213983 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-213983 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-213983 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-213983 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.942513491s)
--- PASS: TestAddons/parallel/CSI (58.89s)
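
Note: the long runs of `kubectl get pvc ... -o jsonpath={.status.phase}` above are helpers_test.go polling until the claim reports the expected phase (the PVC stays Pending until the consuming pod appears, then flips to Bound). A simplified sketch of that loop; the real helper's retry and error handling may differ.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitPVCPhase polls kubectl until the PVC reaches the wanted phase
    // or the timeout expires, mirroring the pattern in the log above.
    func waitPVCPhase(kubeContext, ns, pvc, want string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", kubeContext,
                "get", "pvc", pvc, "-n", ns,
                "-o", "jsonpath={.status.phase}").Output()
            if err == nil && strings.TrimSpace(string(out)) == want {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pvc %s/%s never reached phase %q", ns, pvc, want)
    }

    func main() {
        if err := waitPVCPhase("addons-213983", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }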

                                                
                                    
TestAddons/parallel/Headlamp (20.5s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-213983 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-t8chz" [3e2d1ef4-645b-419d-9729-124737022379] Pending
helpers_test.go:352: "headlamp-dfcdc64b-t8chz" [3e2d1ef4-645b-419d-9729-124737022379] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-t8chz" [3e2d1ef4-645b-419d-9729-124737022379] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004881637s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-213983 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-213983 addons disable headlamp --alsologtostderr -v=1: (6.560650414s)
--- PASS: TestAddons/parallel/Headlamp (20.50s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.6s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-gz7fm" [aa402f24-8ae3-4d64-8da6-3724988c7a01] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004214475s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-213983 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.60s)

                                                
                                    
TestAddons/parallel/LocalPath (57.42s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-213983 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-213983 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-213983 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [db7c60f5-4609-47c4-90c1-184ca3e784a1] Pending
helpers_test.go:352: "test-local-path" [db7c60f5-4609-47c4-90c1-184ca3e784a1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [db7c60f5-4609-47c4-90c1-184ca3e784a1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [db7c60f5-4609-47c4-90c1-184ca3e784a1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004033363s
addons_test.go:967: (dbg) Run:  kubectl --context addons-213983 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-213983 ssh "cat /opt/local-path-provisioner/pvc-201f1235-cf8d-4120-9ec7-7fe42aca63d3_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-213983 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-213983 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-213983 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-213983 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.331164578s)
--- PASS: TestAddons/parallel/LocalPath (57.42s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.93s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-c9l66" [5e8b5d05-ea15-45b9-8a44-7d40d4d34c68] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006831607s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-213983 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.93s)

                                                
                                    
TestAddons/parallel/Yakd (11.87s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-vmxbd" [11982eee-5ecb-4196-ae84-bf161405ed64] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00561291s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-213983 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-213983 addons disable yakd --alsologtostderr -v=1: (5.865513591s)
--- PASS: TestAddons/parallel/Yakd (11.87s)

                                                
                                    
TestAddons/StoppedEnableDisable (81s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-213983
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-213983: (1m20.799148678s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-213983
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-213983
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-213983
--- PASS: TestAddons/StoppedEnableDisable (81.00s)

                                                
                                    
TestCertOptions (63.91s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-648964 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-648964 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m2.583739693s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-648964 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-648964 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-648964 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-648964" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-648964
--- PASS: TestCertOptions (63.91s)

                                                
                                    
TestCertExpiration (284.77s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-369885 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-369885 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m19.31576901s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-369885 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-369885 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (24.456474879s)
helpers_test.go:175: Cleaning up "cert-expiration-369885" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-369885
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-369885: (1.001073458s)
--- PASS: TestCertExpiration (284.77s)
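
Note: the flow exercised here is issue short-lived certs, let them expire, then restart with a longer --cert-expiration; the second start succeeding is what implies the expired certs were regenerated. A rough sketch (profile name and sleep are illustrative):

	minikube start -p cert-exp-demo --cert-expiration=3m --driver=kvm2 --container-runtime=crio
	sleep 180   # wait past expiry
	# Restarting with a one-year expiration should rotate the now-expired certificates.
	minikube start -p cert-exp-demo --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
	minikube delete -p cert-exp-demo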

TestForceSystemdFlag (74.22s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-325714 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-325714 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m13.142426234s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-325714 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-325714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-325714
--- PASS: TestForceSystemdFlag (74.22s)
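
Note: the assertion behind the `cat` above is that --force-systemd makes CRI-O use the systemd cgroup manager. A sketch, assuming the drop-in uses the usual cgroup_manager key (profile name illustrative):

	minikube start -p systemd-demo --memory=3072 --force-systemd --driver=kvm2 --container-runtime=crio
	# Expect something like: cgroup_manager = "systemd"
	minikube -p systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager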

TestForceSystemdEnv (53.99s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-743631 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-743631 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (53.050929787s)
helpers_test.go:175: Cleaning up "force-systemd-env-743631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-743631
--- PASS: TestForceSystemdEnv (53.99s)

TestErrorSpam/setup (35.37s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-309260 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-309260 --driver=kvm2  --container-runtime=crio
E1129 08:36:14.295143    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:14.301524    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:14.312971    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:14.334435    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:14.375929    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:14.457416    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:14.618973    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:14.940711    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:15.582810    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:16.864477    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:19.427446    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:36:24.549608    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-309260 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-309260 --driver=kvm2  --container-runtime=crio: (35.37114798s)
--- PASS: TestErrorSpam/setup (35.37s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-309260 --log_dir /tmp/nospam-309260 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-309260 --log_dir /tmp/nospam-309260 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-309260 --log_dir /tmp/nospam-309260 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.67s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-309260 --log_dir /tmp/nospam-309260 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-309260 --log_dir /tmp/nospam-309260 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-309260 --log_dir /tmp/nospam-309260 status
--- PASS: TestErrorSpam/status (0.67s)

TestErrorSpam/pause (1.52s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-309260 --log_dir /tmp/nospam-309260 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-309260 --log_dir /tmp/nospam-309260 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-309260 --log_dir /tmp/nospam-309260 pause
--- PASS: TestErrorSpam/pause (1.52s)

TestErrorSpam/unpause (1.69s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-309260 --log_dir /tmp/nospam-309260 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-309260 --log_dir /tmp/nospam-309260 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-309260 --log_dir /tmp/nospam-309260 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

TestErrorSpam/stop (5.23s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-309260 --log_dir /tmp/nospam-309260 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-309260 --log_dir /tmp/nospam-309260 stop: (1.840312401s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-309260 --log_dir /tmp/nospam-309260 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-309260 --log_dir /tmp/nospam-309260 stop: (1.801869024s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-309260 --log_dir /tmp/nospam-309260 stop
E1129 08:36:34.791260    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-309260 --log_dir /tmp/nospam-309260 stop: (1.584303977s)
--- PASS: TestErrorSpam/stop (5.23s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22000-5651/.minikube/files/etc/test/nested/copy/9613/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (95.63s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180687 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1129 08:36:55.273072    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:37:36.235549    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-180687 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m35.630350042s)
--- PASS: TestFunctional/serial/StartWithProxy (95.63s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (59.9s)

=== RUN   TestFunctional/serial/SoftStart
I1129 08:38:11.540948    9613 config.go:182] Loaded profile config "functional-180687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180687 --alsologtostderr -v=8
E1129 08:38:58.158055    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-180687 --alsologtostderr -v=8: (59.895879367s)
functional_test.go:678: soft start took 59.896569872s for "functional-180687" cluster.
I1129 08:39:11.437190    9613 config.go:182] Loaded profile config "functional-180687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (59.90s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-180687 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-180687 cache add registry.k8s.io/pause:3.1: (1.111526699s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-180687 cache add registry.k8s.io/pause:3.3: (1.171739253s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-180687 cache add registry.k8s.io/pause:latest: (1.172367968s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.46s)
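
Note: `cache add` pulls an image to the host cache and loads it into the node; a minimal sketch using the same image and profile as above:

	# Cache a remote image and load it into the running cluster.
	minikube -p functional-180687 cache add registry.k8s.io/pause:3.1
	# List what the host has cached ...
	minikube cache list
	# ... and confirm the runtime inside the node can see it.
	minikube -p functional-180687 ssh sudo crictl images | grep pause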

TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-180687 /tmp/TestFunctionalserialCacheCmdcacheadd_local1552594695/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 cache add minikube-local-cache-test:functional-180687
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-180687 cache add minikube-local-cache-test:functional-180687: (1.760343288s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 cache delete minikube-local-cache-test:functional-180687
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-180687
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.13s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180687 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (170.938851ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
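
Note: the exit-1 `crictl inspecti` above is provoked deliberately; `cache reload` then pushes every host-cached image back into the node. Sketched with the same image:

	# Remove the image inside the node, then restore it from the host cache.
	minikube -p functional-180687 ssh sudo crictl rmi registry.k8s.io/pause:latest
	minikube -p functional-180687 cache reload
	# inspecti should now exit 0 again.
	minikube -p functional-180687 ssh sudo crictl inspecti registry.k8s.io/pause:latest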

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 kubectl -- --context functional-180687 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-180687 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (36.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180687 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-180687 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.254753516s)
functional_test.go:776: restart took 36.254854029s for "functional-180687" cluster.
I1129 08:39:55.617372    9613 config.go:182] Loaded profile config "functional-180687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (36.26s)
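
Note: --extra-config threads component flags through to the control plane; here it enables an apiserver admission plugin. One way to confirm the flag landed, assuming the standard kubeadm static-pod manifest path (an assumption, not something this test checks):

	minikube start -p functional-180687 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
	# Assumed kubeadm layout: the static pod manifest should carry the flag.
	minikube -p functional-180687 ssh "sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml"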

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-180687 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.35s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-180687 logs: (1.348993352s)
--- PASS: TestFunctional/serial/LogsCmd (1.35s)

TestFunctional/serial/LogsFileCmd (1.37s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 logs --file /tmp/TestFunctionalserialLogsFileCmd2193644590/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-180687 logs --file /tmp/TestFunctionalserialLogsFileCmd2193644590/001/logs.txt: (1.365647095s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.37s)

TestFunctional/serial/InvalidService (4.62s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-180687 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-180687
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-180687: exit status 115 (237.797046ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.50:30968 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-180687 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-180687 delete -f testdata/invalidsvc.yaml: (1.182051997s)
--- PASS: TestFunctional/serial/InvalidService (4.62s)
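
Note: the exit-115 SVC_UNREACHABLE above fires because the service selects no running pod. Checking the endpoints first makes that failure mode visible; a sketch:

	kubectl --context functional-180687 apply -f testdata/invalidsvc.yaml
	# No ready pods -> empty endpoints -> `minikube service` refuses with exit status 115.
	kubectl --context functional-180687 get endpoints invalid-svc
	minikube -p functional-180687 service invalid-svc || echo "exit status $?"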

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180687 config get cpus: exit status 14 (76.292012ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180687 config get cpus: exit status 14 (71.505537ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
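
Note: the exit-14 paths above are the expected behaviour of `config get` on an unset key; the round trip looks like:

	minikube -p functional-180687 config set cpus 2
	minikube -p functional-180687 config get cpus     # prints 2, exit 0
	minikube -p functional-180687 config unset cpus
	minikube -p functional-180687 config get cpus     # key not found, exit 14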

TestFunctional/parallel/DashboardCmd (17.07s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-180687 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-180687 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 15869: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.07s)

TestFunctional/parallel/DryRun (0.22s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180687 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-180687 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (109.784669ms)

-- stdout --
	* [functional-180687] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-5651/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5651/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1129 08:40:29.787855   15809 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:40:29.787989   15809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:40:29.787998   15809 out.go:374] Setting ErrFile to fd 2...
	I1129 08:40:29.788002   15809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:40:29.788693   15809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
	I1129 08:40:29.789593   15809 out.go:368] Setting JSON to false
	I1129 08:40:29.790455   15809 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1374,"bootTime":1764404256,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 08:40:29.790567   15809 start.go:143] virtualization: kvm guest
	I1129 08:40:29.792211   15809 out.go:179] * [functional-180687] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 08:40:29.793650   15809 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 08:40:29.793677   15809 notify.go:221] Checking for updates...
	I1129 08:40:29.796340   15809 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 08:40:29.797484   15809 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5651/kubeconfig
	I1129 08:40:29.798489   15809 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5651/.minikube
	I1129 08:40:29.799564   15809 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 08:40:29.800676   15809 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 08:40:29.802129   15809 config.go:182] Loaded profile config "functional-180687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:40:29.802651   15809 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 08:40:29.834273   15809 out.go:179] * Using the kvm2 driver based on existing profile
	I1129 08:40:29.835432   15809 start.go:309] selected driver: kvm2
	I1129 08:40:29.835447   15809 start.go:927] validating driver "kvm2" against &{Name:functional-180687 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-180687 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:40:29.835558   15809 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 08:40:29.837405   15809 out.go:203] 
	W1129 08:40:29.838502   15809 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1129 08:40:29.839553   15809 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180687 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.22s)

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180687 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-180687 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (115.518091ms)

-- stdout --
	* [functional-180687] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-5651/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5651/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1129 08:40:30.008744   15841 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:40:30.008868   15841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:40:30.008880   15841 out.go:374] Setting ErrFile to fd 2...
	I1129 08:40:30.008887   15841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:40:30.009169   15841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
	I1129 08:40:30.009609   15841 out.go:368] Setting JSON to false
	I1129 08:40:30.010512   15841 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1374,"bootTime":1764404256,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 08:40:30.010579   15841 start.go:143] virtualization: kvm guest
	I1129 08:40:30.012750   15841 out.go:179] * [functional-180687] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1129 08:40:30.014383   15841 notify.go:221] Checking for updates...
	I1129 08:40:30.014432   15841 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 08:40:30.016199   15841 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 08:40:30.017716   15841 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5651/kubeconfig
	I1129 08:40:30.019297   15841 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5651/.minikube
	I1129 08:40:30.020634   15841 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 08:40:30.021843   15841 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 08:40:30.023622   15841 config.go:182] Loaded profile config "functional-180687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:40:30.024120   15841 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 08:40:30.057153   15841 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1129 08:40:30.058293   15841 start.go:309] selected driver: kvm2
	I1129 08:40:30.058307   15841 start.go:927] validating driver "kvm2" against &{Name:functional-180687 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-180687 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.50 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1129 08:40:30.058456   15841 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 08:40:30.060762   15841 out.go:203] 
	W1129 08:40:30.061880   15841 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1129 08:40:30.063042   15841 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/StatusCmd (0.75s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.75s)

TestFunctional/parallel/ServiceCmdConnect (18.55s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-180687 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-180687 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-g9kq4" [c490eb3b-4c97-4afb-b6cf-81c64d053737] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-g9kq4" [c490eb3b-4c97-4afb-b6cf-81c64d053737] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 18.004515474s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.50:31080
functional_test.go:1680: http://192.168.39.50:31080: success! body:
Request served by hello-node-connect-7d85dfc575-g9kq4

HTTP/1.1 GET /

Host: 192.168.39.50:31080
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (18.55s)
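
Note: the NodePort round trip above amounts to the following sequence (image and port as in the test; the curl at the end stands in for the test's HTTP GET):

	kubectl --context functional-180687 create deployment hello-node-connect --image kicbase/echo-server
	kubectl --context functional-180687 expose deployment hello-node-connect --type=NodePort --port=8080
	# Resolve the node URL and hit it once the pod is Ready.
	URL=$(minikube -p functional-180687 service hello-node-connect --url)
	curl -s "$URL"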

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (45.66s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [29dc6fdd-a277-46be-bac9-c7332595fcf9] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006335971s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-180687 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-180687 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-180687 get pvc myclaim -o=json
I1129 08:40:09.647459    9613 retry.go:31] will retry after 1.799793318s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:4cd735b3-1d92-4048-907d-edc77e1ab48a ResourceVersion:844 Generation:0 CreationTimestamp:2025-11-29 08:40:09 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc00176a7d0 VolumeMode:0xc00176a7e0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-180687 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-180687 apply -f testdata/storage-provisioner/pod.yaml
I1129 08:40:11.657046    9613 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [1c0c26e4-83ca-4df8-8d00-552c24c5dd40] Pending
helpers_test.go:352: "sp-pod" [1c0c26e4-83ca-4df8-8d00-552c24c5dd40] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [1c0c26e4-83ca-4df8-8d00-552c24c5dd40] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.003213784s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-180687 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-180687 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-180687 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ff68d2f7-6ab2-47f6-bb81-146a9a7eb448] Pending
helpers_test.go:352: "sp-pod" [ff68d2f7-6ab2-47f6-bb81-146a9a7eb448] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [ff68d2f7-6ab2-47f6-bb81-146a9a7eb448] Running
2025/11/29 08:40:46 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.004086365s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-180687 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.66s)
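
Note: the persistence assertion is write through the PVC from one pod, delete the pod, and read the file back from a replacement. Sketched with the same testdata manifests used by the test (paths are relative to the minikube source tree):

	kubectl --context functional-180687 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-180687 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-180687 exec sp-pod -- touch /tmp/mount/foo
	# Recreate the pod; the provisioned volume must still hold the file.
	kubectl --context functional-180687 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-180687 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-180687 exec sp-pod -- ls /tmp/mount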

TestFunctional/parallel/SSHCmd (0.39s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

TestFunctional/parallel/CpCmd (1.15s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh -n functional-180687 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 cp functional-180687:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2347091183/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh -n functional-180687 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh -n functional-180687 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.15s)
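
Note: `minikube cp` copies in either direction between host and node, which is all three runs above exercise; a sketch (the /tmp destination is illustrative):

	# Host -> node, then node -> host.
	minikube -p functional-180687 cp testdata/cp-test.txt /home/docker/cp-test.txt
	minikube -p functional-180687 cp functional-180687:/home/docker/cp-test.txt /tmp/cp-test.txt
	# Verify inside the node.
	minikube -p functional-180687 ssh -n functional-180687 "sudo cat /home/docker/cp-test.txt"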

TestFunctional/parallel/MySQL (24.87s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-180687 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-h9wc6" [12b7dede-980e-466a-bdcd-4a0a6d2750d5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-h9wc6" [12b7dede-980e-466a-bdcd-4a0a6d2750d5] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.011483123s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-180687 exec mysql-5bb876957f-h9wc6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-180687 exec mysql-5bb876957f-h9wc6 -- mysql -ppassword -e "show databases;": exit status 1 (279.174713ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1129 08:40:25.325549    9613 retry.go:31] will retry after 656.079257ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-180687 exec mysql-5bb876957f-h9wc6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-180687 exec mysql-5bb876957f-h9wc6 -- mysql -ppassword -e "show databases;": exit status 1 (471.985217ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1129 08:40:26.454181    9613 retry.go:31] will retry after 1.29271321s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-180687 exec mysql-5bb876957f-h9wc6 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-180687 exec mysql-5bb876957f-h9wc6 -- mysql -ppassword -e "show databases;": exit status 1 (174.334048ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1129 08:40:27.922639    9613 retry.go:31] will retry after 1.689732561s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-180687 exec mysql-5bb876957f-h9wc6 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.87s)
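Note: the ERROR 1045/2002 responses above are transient while the mysql:5.7 entrypoint initializes (the official image restarts the server once during first boot), so the harness retries with backoff until the query succeeds. A minimal sketch of the same wait loop, with the pod name from this run:

	until kubectl --context functional-180687 exec mysql-5bb876957f-h9wc6 -- mysql -ppassword -e "show databases;"; do sleep 2; done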

TestFunctional/parallel/FileSync (0.17s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9613/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "sudo cat /etc/test/nested/copy/9613/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.17s)

TestFunctional/parallel/CertSync (1s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9613.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "sudo cat /etc/ssl/certs/9613.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9613.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "sudo cat /usr/share/ca-certificates/9613.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/96132.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "sudo cat /etc/ssl/certs/96132.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/96132.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "sudo cat /usr/share/ca-certificates/96132.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.00s)
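Note: the 51391683.0 and 3ec20f2e.0 names appear to be the OpenSSL subject-hash aliases of the synced 9613.pem and 96132.pem certificates (the <hash>.0 convention used by system cert directories). Assuming openssl is present in the guest, the pairing can be checked with:

	out/minikube-linux-amd64 -p functional-180687 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/9613.pem"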

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-180687 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
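Note: the go-template flattens the first node's label keys onto one line; the same query works standalone:

	kubectl --context functional-180687 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"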

TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180687 ssh "sudo systemctl is-active docker": exit status 1 (204.874614ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180687 ssh "sudo systemctl is-active containerd": exit status 1 (199.83172ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)
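Note: exit status 3 is systemctl's "unit is not active" code, so the two non-zero exits above are the expected outcome on a crio cluster: docker and containerd must both report inactive. Manual check:

	out/minikube-linux-amd64 -p functional-180687 ssh "sudo systemctl is-active docker"
	out/minikube-linux-amd64 -p functional-180687 ssh "sudo systemctl is-active containerd"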

TestFunctional/parallel/License (0.37s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.37s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-180687 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-180687 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-q65mj" [18e812a5-55d5-47d0-a6cb-2cec5a9bb90c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-q65mj" [18e812a5-55d5-47d0-a6cb-2cec5a9bb90c] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004746382s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)
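Note: the workload above is plain kubectl; to reproduce outside the harness:

	kubectl --context functional-180687 create deployment hello-node --image kicbase/echo-server
	kubectl --context functional-180687 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-180687 get pods -l app=hello-node --watch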

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.72s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.72s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-180687 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-180687
localhost/kicbase/echo-server:functional-180687
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-180687 image ls --format short --alsologtostderr:
I1129 08:40:35.091604   16090 out.go:360] Setting OutFile to fd 1 ...
I1129 08:40:35.091732   16090 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:40:35.091746   16090 out.go:374] Setting ErrFile to fd 2...
I1129 08:40:35.091753   16090 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:40:35.092155   16090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
I1129 08:40:35.092945   16090 config.go:182] Loaded profile config "functional-180687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 08:40:35.093110   16090 config.go:182] Loaded profile config "functional-180687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 08:40:35.095688   16090 ssh_runner.go:195] Run: systemctl --version
I1129 08:40:35.098483   16090 main.go:143] libmachine: domain functional-180687 has defined MAC address 52:54:00:db:70:4b in network mk-functional-180687
I1129 08:40:35.098988   16090 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:db:70:4b", ip: ""} in network mk-functional-180687: {Iface:virbr1 ExpiryTime:2025-11-29 09:36:50 +0000 UTC Type:0 Mac:52:54:00:db:70:4b Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:functional-180687 Clientid:01:52:54:00:db:70:4b}
I1129 08:40:35.099029   16090 main.go:143] libmachine: domain functional-180687 has defined IP address 192.168.39.50 and MAC address 52:54:00:db:70:4b in network mk-functional-180687
I1129 08:40:35.099222   16090 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/functional-180687/id_rsa Username:docker}
I1129 08:40:35.187600   16090 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.32s)
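Note: image ls supports four output formats; short is shown above, and the table, json and yaml variants are exercised in the following blocks:

	out/minikube-linux-amd64 -p functional-180687 image ls --format short
	out/minikube-linux-amd64 -p functional-180687 image ls --format table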

TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-180687 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ localhost/minikube-local-cache-test     │ functional-180687  │ b821487c62657 │ 3.33kB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-180687  │ 9056ab77afb8e │ 4.95MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-180687 image ls --format table --alsologtostderr:
I1129 08:40:36.489031   16232 out.go:360] Setting OutFile to fd 1 ...
I1129 08:40:36.489141   16232 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:40:36.489150   16232 out.go:374] Setting ErrFile to fd 2...
I1129 08:40:36.489154   16232 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:40:36.489329   16232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
I1129 08:40:36.489859   16232 config.go:182] Loaded profile config "functional-180687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 08:40:36.489950   16232 config.go:182] Loaded profile config "functional-180687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 08:40:36.492394   16232 ssh_runner.go:195] Run: systemctl --version
I1129 08:40:36.494892   16232 main.go:143] libmachine: domain functional-180687 has defined MAC address 52:54:00:db:70:4b in network mk-functional-180687
I1129 08:40:36.495516   16232 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:db:70:4b", ip: ""} in network mk-functional-180687: {Iface:virbr1 ExpiryTime:2025-11-29 09:36:50 +0000 UTC Type:0 Mac:52:54:00:db:70:4b Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:functional-180687 Clientid:01:52:54:00:db:70:4b}
I1129 08:40:36.495547   16232 main.go:143] libmachine: domain functional-180687 has defined IP address 192.168.39.50 and MAC address 52:54:00:db:70:4b in network mk-functional-180687
I1129 08:40:36.495713   16232 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/functional-180687/id_rsa Username:docker}
I1129 08:40:36.591446   16232 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-180687 image ls --format json --alsologtostderr:
[{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93ef
c2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f
622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"
id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b
6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"rep
oTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-180687"],"size":"4945146"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"b821487c62657790e9c5836ab8b60021b750694faf1fba4bd94171037dd51b0e","repoDigests":["localhost/minikube-local-cache-test@sha256:54471eedc21407c216f91f8cd455d5362d571
bb27ac6c1fd5d883110560d7f24"],"repoTags":["localhost/minikube-local-cache-test:functional-180687"],"size":"3330"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-180687 image ls --format json --alsologtostderr:
I1129 08:40:36.282637   16221 out.go:360] Setting OutFile to fd 1 ...
I1129 08:40:36.282730   16221 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:40:36.282741   16221 out.go:374] Setting ErrFile to fd 2...
I1129 08:40:36.282746   16221 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:40:36.283015   16221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
I1129 08:40:36.283600   16221 config.go:182] Loaded profile config "functional-180687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 08:40:36.283716   16221 config.go:182] Loaded profile config "functional-180687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 08:40:36.286102   16221 ssh_runner.go:195] Run: systemctl --version
I1129 08:40:36.288535   16221 main.go:143] libmachine: domain functional-180687 has defined MAC address 52:54:00:db:70:4b in network mk-functional-180687
I1129 08:40:36.289100   16221 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:db:70:4b", ip: ""} in network mk-functional-180687: {Iface:virbr1 ExpiryTime:2025-11-29 09:36:50 +0000 UTC Type:0 Mac:52:54:00:db:70:4b Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:functional-180687 Clientid:01:52:54:00:db:70:4b}
I1129 08:40:36.289131   16221 main.go:143] libmachine: domain functional-180687 has defined IP address 192.168.39.50 and MAC address 52:54:00:db:70:4b in network mk-functional-180687
I1129 08:40:36.289370   16221 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/functional-180687/id_rsa Username:docker}
I1129 08:40:36.375366   16221 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-180687 image ls --format yaml --alsologtostderr:
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: b821487c62657790e9c5836ab8b60021b750694faf1fba4bd94171037dd51b0e
repoDigests:
- localhost/minikube-local-cache-test@sha256:54471eedc21407c216f91f8cd455d5362d571bb27ac6c1fd5d883110560d7f24
repoTags:
- localhost/minikube-local-cache-test:functional-180687
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-180687
size: "4945146"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-180687 image ls --format yaml --alsologtostderr:
I1129 08:40:35.415775   16143 out.go:360] Setting OutFile to fd 1 ...
I1129 08:40:35.416071   16143 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:40:35.416081   16143 out.go:374] Setting ErrFile to fd 2...
I1129 08:40:35.416085   16143 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1129 08:40:35.416283   16143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
I1129 08:40:35.416857   16143 config.go:182] Loaded profile config "functional-180687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 08:40:35.416946   16143 config.go:182] Loaded profile config "functional-180687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1129 08:40:35.419474   16143 ssh_runner.go:195] Run: systemctl --version
I1129 08:40:35.422802   16143 main.go:143] libmachine: domain functional-180687 has defined MAC address 52:54:00:db:70:4b in network mk-functional-180687
I1129 08:40:35.423349   16143 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:db:70:4b", ip: ""} in network mk-functional-180687: {Iface:virbr1 ExpiryTime:2025-11-29 09:36:50 +0000 UTC Type:0 Mac:52:54:00:db:70:4b Iaid: IPaddr:192.168.39.50 Prefix:24 Hostname:functional-180687 Clientid:01:52:54:00:db:70:4b}
I1129 08:40:35.423379   16143 main.go:143] libmachine: domain functional-180687 has defined IP address 192.168.39.50 and MAC address 52:54:00:db:70:4b in network mk-functional-180687
I1129 08:40:35.423564   16143 sshutil.go:53] new ssh client: &{IP:192.168.39.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/functional-180687/id_rsa Username:docker}
I1129 08:40:35.572192   16143 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/Setup (1.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.724998508s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-180687
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 image load --daemon kicbase/echo-server:functional-180687 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-180687 image load --daemon kicbase/echo-server:functional-180687 --alsologtostderr: (1.09230764s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)
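Note: image load --daemon reads the tag from the local Docker daemon and imports it into the cluster's crio image store; the tag itself was created in ImageCommands/Setup above:

	docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-180687
	out/minikube-linux-amd64 -p functional-180687 image load --daemon kicbase/echo-server:functional-180687
	out/minikube-linux-amd64 -p functional-180687 image ls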

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 image load --daemon kicbase/echo-server:functional-180687 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-180687
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 image load --daemon kicbase/echo-server:functional-180687 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.09s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 image save kicbase/echo-server:functional-180687 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.83s)

TestFunctional/parallel/ServiceCmd/List (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.30s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 service list -o json
functional_test.go:1504: Took "296.239084ms" to run "out/minikube-linux-amd64 -p functional-180687 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-180687 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (5.269191032s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.49s)
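Note: image save and image load are symmetric, and this test feeds back the tarball written by ImageSaveToFile above; any writable path works in place of the workspace one:

	out/minikube-linux-amd64 -p functional-180687 image save kicbase/echo-server:functional-180687 ./echo-server-save.tar
	out/minikube-linux-amd64 -p functional-180687 image load ./echo-server-save.tar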

TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.50:31632
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

TestFunctional/parallel/ServiceCmd/Format (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

TestFunctional/parallel/ServiceCmd/URL (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.50:31632
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)
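Note: the List, JSONOutput, HTTPS, Format and URL probes all resolve the same NodePort (31632 on this run); the plain URL form is the usual entry point:

	out/minikube-linux-amd64 -p functional-180687 service hello-node --url
	curl -s http://192.168.39.50:31632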

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-180687
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 image save --daemon kicbase/echo-server:functional-180687 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-180687
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.60s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "245.293055ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "60.894923ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "283.893508ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "66.539151ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
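Note: --light (and -l) skips probing each cluster's status, which is why the light variants above return in roughly 60-70ms versus 250-280ms for the full listings:

	out/minikube-linux-amd64 profile list -o json --light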

TestFunctional/parallel/MountCmd/any-port (12.18s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-180687 /tmp/TestFunctionalparallelMountCmdany-port3359044810/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1764405621487241223" to /tmp/TestFunctionalparallelMountCmdany-port3359044810/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1764405621487241223" to /tmp/TestFunctionalparallelMountCmdany-port3359044810/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1764405621487241223" to /tmp/TestFunctionalparallelMountCmdany-port3359044810/001/test-1764405621487241223
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180687 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (199.252746ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1129 08:40:21.686851    9613 retry.go:31] will retry after 561.2397ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 29 08:40 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 29 08:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 29 08:40 test-1764405621487241223
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh cat /mount-9p/test-1764405621487241223
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-180687 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [ef543eca-59c4-4712-bded-75c1a6f3c8bf] Pending
helpers_test.go:352: "busybox-mount" [ef543eca-59c4-4712-bded-75c1a6f3c8bf] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [ef543eca-59c4-4712-bded-75c1a6f3c8bf] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [ef543eca-59c4-4712-bded-75c1a6f3c8bf] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 10.004240804s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-180687 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180687 /tmp/TestFunctionalparallelMountCmdany-port3359044810/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (12.18s)
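Note: the 9p mount is served by the host-side "minikube mount" process started above, so it vanishes when that process stops (the first findmnt probe failed simply because the daemon was not up yet). Manual equivalent, with any host directory in place of the harness temp dir:

	out/minikube-linux-amd64 mount -p functional-180687 /tmp/src:/mount-9p &
	out/minikube-linux-amd64 -p functional-180687 ssh "findmnt -T /mount-9p | grep 9p"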

TestFunctional/parallel/MountCmd/specific-port (1.53s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-180687 /tmp/TestFunctionalparallelMountCmdspecific-port3599229527/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180687 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (194.960907ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1129 08:40:33.862549    9613 retry.go:31] will retry after 466.580538ms: exit status 1
I1129 08:40:33.884850    9613 detect.go:223] nested VM detected
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180687 /tmp/TestFunctionalparallelMountCmdspecific-port3599229527/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180687 ssh "sudo umount -f /mount-9p": exit status 1 (208.346973ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-180687 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180687 /tmp/TestFunctionalparallelMountCmdspecific-port3599229527/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.53s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-180687 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3759004959/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-180687 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3759004959/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-180687 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3759004959/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180687 ssh "findmnt -T" /mount1: exit status 1 (182.642792ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1129 08:40:35.376308    9613 retry.go:31] will retry after 273.788076ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-180687 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-180687 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180687 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3759004959/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180687 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3759004959/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180687 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3759004959/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.04s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-180687
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-180687
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-180687
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (257.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1129 08:41:14.290578    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:41:42.000068    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:45:03.242139    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:45:03.248597    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:45:03.260603    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:45:03.282300    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:45:03.323775    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:45:03.405268    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:45:03.566939    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:45:03.888777    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:45:04.531040    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:45:05.813186    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-243572 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (4m16.444666155s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (257.03s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.40s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 kubectl -- rollout status deployment/busybox
E1129 08:45:08.374490    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-243572 kubectl -- rollout status deployment/busybox: (5.057808038s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
E1129 08:45:13.496351    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 kubectl -- exec busybox-7b57f96db7-6qn6k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 kubectl -- exec busybox-7b57f96db7-bskk7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 kubectl -- exec busybox-7b57f96db7-c2f99 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 kubectl -- exec busybox-7b57f96db7-6qn6k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 kubectl -- exec busybox-7b57f96db7-bskk7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 kubectl -- exec busybox-7b57f96db7-c2f99 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 kubectl -- exec busybox-7b57f96db7-6qn6k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 kubectl -- exec busybox-7b57f96db7-bskk7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 kubectl -- exec busybox-7b57f96db7-c2f99 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.40s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 kubectl -- exec busybox-7b57f96db7-6qn6k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 kubectl -- exec busybox-7b57f96db7-6qn6k -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 kubectl -- exec busybox-7b57f96db7-bskk7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 kubectl -- exec busybox-7b57f96db7-bskk7 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 kubectl -- exec busybox-7b57f96db7-c2f99 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 kubectl -- exec busybox-7b57f96db7-c2f99 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.32s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (47.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 node add --alsologtostderr -v 5
E1129 08:45:23.738738    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:45:44.220252    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-243572 node add --alsologtostderr -v 5: (46.454919189s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.14s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-243572 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.68s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (10.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 cp testdata/cp-test.txt ha-243572:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 cp ha-243572:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1908511433/001/cp-test_ha-243572.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 cp ha-243572:/home/docker/cp-test.txt ha-243572-m02:/home/docker/cp-test_ha-243572_ha-243572-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m02 "sudo cat /home/docker/cp-test_ha-243572_ha-243572-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 cp ha-243572:/home/docker/cp-test.txt ha-243572-m03:/home/docker/cp-test_ha-243572_ha-243572-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m03 "sudo cat /home/docker/cp-test_ha-243572_ha-243572-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 cp ha-243572:/home/docker/cp-test.txt ha-243572-m04:/home/docker/cp-test_ha-243572_ha-243572-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m04 "sudo cat /home/docker/cp-test_ha-243572_ha-243572-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 cp testdata/cp-test.txt ha-243572-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 cp ha-243572-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1908511433/001/cp-test_ha-243572-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 cp ha-243572-m02:/home/docker/cp-test.txt ha-243572:/home/docker/cp-test_ha-243572-m02_ha-243572.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572 "sudo cat /home/docker/cp-test_ha-243572-m02_ha-243572.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 cp ha-243572-m02:/home/docker/cp-test.txt ha-243572-m03:/home/docker/cp-test_ha-243572-m02_ha-243572-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m03 "sudo cat /home/docker/cp-test_ha-243572-m02_ha-243572-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 cp ha-243572-m02:/home/docker/cp-test.txt ha-243572-m04:/home/docker/cp-test_ha-243572-m02_ha-243572-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m04 "sudo cat /home/docker/cp-test_ha-243572-m02_ha-243572-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 cp testdata/cp-test.txt ha-243572-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 cp ha-243572-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1908511433/001/cp-test_ha-243572-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 cp ha-243572-m03:/home/docker/cp-test.txt ha-243572:/home/docker/cp-test_ha-243572-m03_ha-243572.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572 "sudo cat /home/docker/cp-test_ha-243572-m03_ha-243572.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 cp ha-243572-m03:/home/docker/cp-test.txt ha-243572-m02:/home/docker/cp-test_ha-243572-m03_ha-243572-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m02 "sudo cat /home/docker/cp-test_ha-243572-m03_ha-243572-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 cp ha-243572-m03:/home/docker/cp-test.txt ha-243572-m04:/home/docker/cp-test_ha-243572-m03_ha-243572-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m04 "sudo cat /home/docker/cp-test_ha-243572-m03_ha-243572-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 cp testdata/cp-test.txt ha-243572-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 cp ha-243572-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1908511433/001/cp-test_ha-243572-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 cp ha-243572-m04:/home/docker/cp-test.txt ha-243572:/home/docker/cp-test_ha-243572-m04_ha-243572.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572 "sudo cat /home/docker/cp-test_ha-243572-m04_ha-243572.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 cp ha-243572-m04:/home/docker/cp-test.txt ha-243572-m02:/home/docker/cp-test_ha-243572-m04_ha-243572-m02.txt
E1129 08:46:14.290712    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m02 "sudo cat /home/docker/cp-test_ha-243572-m04_ha-243572-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 cp ha-243572-m04:/home/docker/cp-test.txt ha-243572-m03:/home/docker/cp-test_ha-243572-m04_ha-243572-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 ssh -n ha-243572-m03 "sudo cat /home/docker/cp-test_ha-243572-m04_ha-243572-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.73s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (80.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 node stop m02 --alsologtostderr -v 5
E1129 08:46:25.182571    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-243572 node stop m02 --alsologtostderr -v 5: (1m19.567364367s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-243572 status --alsologtostderr -v 5: exit status 7 (522.158985ms)

                                                
                                                
-- stdout --
	ha-243572
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-243572-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-243572-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-243572-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 08:47:35.076280   19493 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:47:35.076561   19493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:47:35.076571   19493 out.go:374] Setting ErrFile to fd 2...
	I1129 08:47:35.076576   19493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:47:35.076811   19493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
	I1129 08:47:35.077030   19493 out.go:368] Setting JSON to false
	I1129 08:47:35.077057   19493 mustload.go:66] Loading cluster: ha-243572
	I1129 08:47:35.077180   19493 notify.go:221] Checking for updates...
	I1129 08:47:35.077532   19493 config.go:182] Loaded profile config "ha-243572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:47:35.077553   19493 status.go:174] checking status of ha-243572 ...
	I1129 08:47:35.079637   19493 status.go:371] ha-243572 host status = "Running" (err=<nil>)
	I1129 08:47:35.079653   19493 host.go:66] Checking if "ha-243572" exists ...
	I1129 08:47:35.082455   19493 main.go:143] libmachine: domain ha-243572 has defined MAC address 52:54:00:27:44:b7 in network mk-ha-243572
	I1129 08:47:35.083050   19493 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:44:b7", ip: ""} in network mk-ha-243572: {Iface:virbr1 ExpiryTime:2025-11-29 09:41:06 +0000 UTC Type:0 Mac:52:54:00:27:44:b7 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-243572 Clientid:01:52:54:00:27:44:b7}
	I1129 08:47:35.083085   19493 main.go:143] libmachine: domain ha-243572 has defined IP address 192.168.39.210 and MAC address 52:54:00:27:44:b7 in network mk-ha-243572
	I1129 08:47:35.083233   19493 host.go:66] Checking if "ha-243572" exists ...
	I1129 08:47:35.083510   19493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 08:47:35.086169   19493 main.go:143] libmachine: domain ha-243572 has defined MAC address 52:54:00:27:44:b7 in network mk-ha-243572
	I1129 08:47:35.086594   19493 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:44:b7", ip: ""} in network mk-ha-243572: {Iface:virbr1 ExpiryTime:2025-11-29 09:41:06 +0000 UTC Type:0 Mac:52:54:00:27:44:b7 Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-243572 Clientid:01:52:54:00:27:44:b7}
	I1129 08:47:35.086641   19493 main.go:143] libmachine: domain ha-243572 has defined IP address 192.168.39.210 and MAC address 52:54:00:27:44:b7 in network mk-ha-243572
	I1129 08:47:35.086784   19493 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/ha-243572/id_rsa Username:docker}
	I1129 08:47:35.173327   19493 ssh_runner.go:195] Run: systemctl --version
	I1129 08:47:35.180275   19493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 08:47:35.203981   19493 kubeconfig.go:125] found "ha-243572" server: "https://192.168.39.254:8443"
	I1129 08:47:35.204017   19493 api_server.go:166] Checking apiserver status ...
	I1129 08:47:35.204062   19493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 08:47:35.228878   19493 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup
	W1129 08:47:35.243979   19493 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1129 08:47:35.244055   19493 ssh_runner.go:195] Run: ls
	I1129 08:47:35.250264   19493 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1129 08:47:35.255627   19493 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1129 08:47:35.255658   19493 status.go:463] ha-243572 apiserver status = Running (err=<nil>)
	I1129 08:47:35.255671   19493 status.go:176] ha-243572 status: &{Name:ha-243572 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:47:35.255690   19493 status.go:174] checking status of ha-243572-m02 ...
	I1129 08:47:35.257415   19493 status.go:371] ha-243572-m02 host status = "Stopped" (err=<nil>)
	I1129 08:47:35.257432   19493 status.go:384] host is not running, skipping remaining checks
	I1129 08:47:35.257440   19493 status.go:176] ha-243572-m02 status: &{Name:ha-243572-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:47:35.257454   19493 status.go:174] checking status of ha-243572-m03 ...
	I1129 08:47:35.258610   19493 status.go:371] ha-243572-m03 host status = "Running" (err=<nil>)
	I1129 08:47:35.258625   19493 host.go:66] Checking if "ha-243572-m03" exists ...
	I1129 08:47:35.260844   19493 main.go:143] libmachine: domain ha-243572-m03 has defined MAC address 52:54:00:aa:8c:4c in network mk-ha-243572
	I1129 08:47:35.261286   19493 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:8c:4c", ip: ""} in network mk-ha-243572: {Iface:virbr1 ExpiryTime:2025-11-29 09:43:30 +0000 UTC Type:0 Mac:52:54:00:aa:8c:4c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-243572-m03 Clientid:01:52:54:00:aa:8c:4c}
	I1129 08:47:35.261308   19493 main.go:143] libmachine: domain ha-243572-m03 has defined IP address 192.168.39.58 and MAC address 52:54:00:aa:8c:4c in network mk-ha-243572
	I1129 08:47:35.261482   19493 host.go:66] Checking if "ha-243572-m03" exists ...
	I1129 08:47:35.261711   19493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 08:47:35.264010   19493 main.go:143] libmachine: domain ha-243572-m03 has defined MAC address 52:54:00:aa:8c:4c in network mk-ha-243572
	I1129 08:47:35.264456   19493 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:8c:4c", ip: ""} in network mk-ha-243572: {Iface:virbr1 ExpiryTime:2025-11-29 09:43:30 +0000 UTC Type:0 Mac:52:54:00:aa:8c:4c Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:ha-243572-m03 Clientid:01:52:54:00:aa:8c:4c}
	I1129 08:47:35.264477   19493 main.go:143] libmachine: domain ha-243572-m03 has defined IP address 192.168.39.58 and MAC address 52:54:00:aa:8c:4c in network mk-ha-243572
	I1129 08:47:35.264609   19493 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/ha-243572-m03/id_rsa Username:docker}
	I1129 08:47:35.356288   19493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 08:47:35.376703   19493 kubeconfig.go:125] found "ha-243572" server: "https://192.168.39.254:8443"
	I1129 08:47:35.376730   19493 api_server.go:166] Checking apiserver status ...
	I1129 08:47:35.376772   19493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 08:47:35.397074   19493 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1798/cgroup
	W1129 08:47:35.410812   19493 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1798/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1129 08:47:35.410892   19493 ssh_runner.go:195] Run: ls
	I1129 08:47:35.415649   19493 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1129 08:47:35.420356   19493 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1129 08:47:35.420376   19493 status.go:463] ha-243572-m03 apiserver status = Running (err=<nil>)
	I1129 08:47:35.420384   19493 status.go:176] ha-243572-m03 status: &{Name:ha-243572-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:47:35.420397   19493 status.go:174] checking status of ha-243572-m04 ...
	I1129 08:47:35.422046   19493 status.go:371] ha-243572-m04 host status = "Running" (err=<nil>)
	I1129 08:47:35.422062   19493 host.go:66] Checking if "ha-243572-m04" exists ...
	I1129 08:47:35.424494   19493 main.go:143] libmachine: domain ha-243572-m04 has defined MAC address 52:54:00:08:fe:e2 in network mk-ha-243572
	I1129 08:47:35.424915   19493 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:08:fe:e2", ip: ""} in network mk-ha-243572: {Iface:virbr1 ExpiryTime:2025-11-29 09:45:32 +0000 UTC Type:0 Mac:52:54:00:08:fe:e2 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-243572-m04 Clientid:01:52:54:00:08:fe:e2}
	I1129 08:47:35.424938   19493 main.go:143] libmachine: domain ha-243572-m04 has defined IP address 192.168.39.16 and MAC address 52:54:00:08:fe:e2 in network mk-ha-243572
	I1129 08:47:35.425070   19493 host.go:66] Checking if "ha-243572-m04" exists ...
	I1129 08:47:35.425257   19493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 08:47:35.427516   19493 main.go:143] libmachine: domain ha-243572-m04 has defined MAC address 52:54:00:08:fe:e2 in network mk-ha-243572
	I1129 08:47:35.427891   19493 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:08:fe:e2", ip: ""} in network mk-ha-243572: {Iface:virbr1 ExpiryTime:2025-11-29 09:45:32 +0000 UTC Type:0 Mac:52:54:00:08:fe:e2 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:ha-243572-m04 Clientid:01:52:54:00:08:fe:e2}
	I1129 08:47:35.427911   19493 main.go:143] libmachine: domain ha-243572-m04 has defined IP address 192.168.39.16 and MAC address 52:54:00:08:fe:e2 in network mk-ha-243572
	I1129 08:47:35.428040   19493 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/ha-243572-m04/id_rsa Username:docker}
	I1129 08:47:35.515180   19493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 08:47:35.534415   19493 status.go:176] ha-243572-m04 status: &{Name:ha-243572-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (80.09s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (34.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 node start m02 --alsologtostderr -v 5
E1129 08:47:47.106027    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-243572 node start m02 --alsologtostderr -v 5: (33.197348274s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (34.09s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (367.80s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 stop --alsologtostderr -v 5
E1129 08:50:03.245047    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:50:30.947775    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:51:14.291228    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-243572 stop --alsologtostderr -v 5: (4m3.504404627s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 start --wait true --alsologtostderr -v 5
E1129 08:52:37.361906    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-243572 start --wait true --alsologtostderr -v 5: (2m4.141198105s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (367.80s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-243572 node delete m03 --alsologtostderr -v 5: (17.652079895s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.29s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (250.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 stop --alsologtostderr -v 5
E1129 08:55:03.242195    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 08:56:14.290688    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-243572 stop --alsologtostderr -v 5: (4m10.093430983s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-243572 status --alsologtostderr -v 5: exit status 7 (65.710811ms)

                                                
                                                
-- stdout --
	ha-243572
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-243572-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-243572-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 08:58:47.746555   22695 out.go:360] Setting OutFile to fd 1 ...
	I1129 08:58:47.746671   22695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:58:47.746679   22695 out.go:374] Setting ErrFile to fd 2...
	I1129 08:58:47.746684   22695 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 08:58:47.746944   22695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
	I1129 08:58:47.747122   22695 out.go:368] Setting JSON to false
	I1129 08:58:47.747149   22695 mustload.go:66] Loading cluster: ha-243572
	I1129 08:58:47.747219   22695 notify.go:221] Checking for updates...
	I1129 08:58:47.747601   22695 config.go:182] Loaded profile config "ha-243572": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 08:58:47.747622   22695 status.go:174] checking status of ha-243572 ...
	I1129 08:58:47.749843   22695 status.go:371] ha-243572 host status = "Stopped" (err=<nil>)
	I1129 08:58:47.749859   22695 status.go:384] host is not running, skipping remaining checks
	I1129 08:58:47.749864   22695 status.go:176] ha-243572 status: &{Name:ha-243572 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:58:47.749882   22695 status.go:174] checking status of ha-243572-m02 ...
	I1129 08:58:47.751182   22695 status.go:371] ha-243572-m02 host status = "Stopped" (err=<nil>)
	I1129 08:58:47.751196   22695 status.go:384] host is not running, skipping remaining checks
	I1129 08:58:47.751201   22695 status.go:176] ha-243572-m02 status: &{Name:ha-243572-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 08:58:47.751250   22695 status.go:174] checking status of ha-243572-m04 ...
	I1129 08:58:47.752444   22695 status.go:371] ha-243572-m04 host status = "Stopped" (err=<nil>)
	I1129 08:58:47.752480   22695 status.go:384] host is not running, skipping remaining checks
	I1129 08:58:47.752484   22695 status.go:176] ha-243572-m04 status: &{Name:ha-243572-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (250.16s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (92.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1129 09:00:03.242354    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-243572 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m32.150971697s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (92.79s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (105.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 node add --control-plane --alsologtostderr -v 5
E1129 09:01:14.291224    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:01:26.313237    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-243572 node add --control-plane --alsologtostderr -v 5: (1m45.273388525s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-243572 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (105.95s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

                                                
                                    
TestJSONOutput/start/Command (78.60s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-892632 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-892632 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m18.594600072s)
--- PASS: TestJSONOutput/start/Command (78.60s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.71s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-892632 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-892632 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.94s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-892632 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-892632 --output=json --user=testUser: (6.935499557s)
--- PASS: TestJSONOutput/stop/Command (6.94s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-135576 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-135576 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (73.323704ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"daadb064-4789-478d-8e8c-ccb638e435d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-135576] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1bc3fab5-5bec-442a-8018-0317e8866687","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22000"}}
	{"specversion":"1.0","id":"9887a49f-2731-4699-8e26-8e2c3fb8cdf1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"18969fcc-bf57-41d2-b210-7042c3c9eeb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22000-5651/kubeconfig"}}
	{"specversion":"1.0","id":"afd26100-f1d7-43cf-a878-062893b88e4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5651/.minikube"}}
	{"specversion":"1.0","id":"6bae6b3f-8ee1-4488-8027-ddba473a22a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2f4af63e-06a9-4f55-af85-7876c7fefa35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7adece67-4898-49ca-ba2d-86a6792e772f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-135576" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-135576
--- PASS: TestErrorJSONOutput (0.22s)
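
Note: the stdout above is one CloudEvents-style JSON event per line (specversion, id, source, type, datacontenttype, data). A minimal Go sketch of decoding such a line follows; it is illustrative only, not part of the test suite, and the Event struct is hypothetical with field names taken from the output above.

	// cloudevent_sketch.go - decode one line of `minikube start --output=json`.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Event mirrors the envelope shown in the log; "data" values above are
	// all strings, so map[string]string suffices here.
	type Event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","id":"x","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
		var ev Event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		// An error event carries the exit code and a machine-readable name,
		// which is how exit status 56 / DRV_UNSUPPORTED_OS surfaces above.
		fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"])
	}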

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (77.76s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-847153 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-847153 --driver=kvm2  --container-runtime=crio: (38.789533894s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-850222 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-850222 --driver=kvm2  --container-runtime=crio: (36.323069916s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-847153
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-850222
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-850222" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-850222
helpers_test.go:175: Cleaning up "first-847153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-847153
--- PASS: TestMinikubeProfile (77.76s)
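
Note: the test switches the active profile and then reads `profile list -ojson`. A sketch of consuming that JSON follows; the top-level "valid" array and the "Name" field are assumptions about the output shape, so treat this as illustrative only.

	// profile_list_sketch.go - read minikube's profile list as JSON.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
		if err != nil {
			panic(err)
		}
		// Assumed shape: {"valid":[{"Name":...},...], "invalid":[...]}.
		var list struct {
			Valid []struct {
				Name string `json:"Name"`
			} `json:"valid"`
		}
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, p := range list.Valid {
			fmt.Println("profile:", p.Name)
		}
	}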

                                                
                                    
TestMountStart/serial/StartWithMountFirst (22.32s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-984292 --memory=3072 --mount-string /tmp/TestMountStartserial11783259/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1129 09:05:03.241963    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-984292 --memory=3072 --mount-string /tmp/TestMountStartserial11783259/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (21.320753401s)
--- PASS: TestMountStart/serial/StartWithMountFirst (22.32s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-984292 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-984292 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
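
Note: the verification step runs `findmnt --json /minikube-host` inside the guest. A rough sketch of checking a mount the same way follows; the JSON shape matches util-linux findmnt ({"filesystems":[...]}), but treat the exact field set as an assumption.

	// findmnt_sketch.go - confirm a target is mounted via findmnt's JSON output.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type findmntOut struct {
		Filesystems []struct {
			Target  string `json:"target"`
			Source  string `json:"source"`
			FSType  string `json:"fstype"`
			Options string `json:"options"`
		} `json:"filesystems"`
	}

	func main() {
		out, err := exec.Command("findmnt", "--json", "/minikube-host").Output()
		if err != nil {
			fmt.Println("not mounted:", err) // findmnt exits non-zero if the target is absent
			return
		}
		var fm findmntOut
		if err := json.Unmarshal(out, &fm); err != nil {
			panic(err)
		}
		for _, fs := range fm.Filesystems {
			fmt.Printf("%s is mounted from %s (%s)\n", fs.Target, fs.Source, fs.FSType)
		}
	}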

                                                
                                    
TestMountStart/serial/StartWithMountSecond (19.99s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-002020 --memory=3072 --mount-string /tmp/TestMountStartserial11783259/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-002020 --memory=3072 --mount-string /tmp/TestMountStartserial11783259/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.984854122s)
--- PASS: TestMountStart/serial/StartWithMountSecond (19.99s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.31s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-002020 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-002020 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-984292 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-002020 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-002020 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.32s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-002020
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-002020: (1.280670568s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (19.65s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-002020
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-002020: (18.649709349s)
--- PASS: TestMountStart/serial/RestartStopped (19.65s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-002020 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-002020 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (125.12s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-446803 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1129 09:06:14.290979    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-446803 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m4.797421352s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (125.12s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.24s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-446803 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-446803 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-446803 -- rollout status deployment/busybox: (4.63546054s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-446803 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-446803 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-446803 -- exec busybox-7b57f96db7-qpbwj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-446803 -- exec busybox-7b57f96db7-vdlkd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-446803 -- exec busybox-7b57f96db7-qpbwj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-446803 -- exec busybox-7b57f96db7-vdlkd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-446803 -- exec busybox-7b57f96db7-qpbwj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-446803 -- exec busybox-7b57f96db7-vdlkd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.24s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.85s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-446803 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-446803 -- exec busybox-7b57f96db7-qpbwj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-446803 -- exec busybox-7b57f96db7-qpbwj -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-446803 -- exec busybox-7b57f96db7-vdlkd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-446803 -- exec busybox-7b57f96db7-vdlkd -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)
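
Note: the pipeline above, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, extracts the host IP by taking line 5 of the nslookup output and its third space-delimited field. The Go sketch below replicates exactly that extraction (strings.Split mirrors cut's empty-field behavior); the sample output is illustrative, not captured from this run.

	// hostip_sketch.go - replicate the awk/cut step used in the test above.
	package main

	import (
		"fmt"
		"strings"
	)

	func thirdFieldOfLine5(nslookupOut string) string {
		lines := strings.Split(nslookupOut, "\n")
		if len(lines) < 5 {
			return ""
		}
		fields := strings.Split(lines[4], " ") // awk 'NR==5' -> index 4
		if len(fields) < 3 {
			return ""
		}
		return fields[2] // cut -d' ' -f3 -> index 2
	}

	func main() {
		sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10\n\nName:      host.minikube.internal\nAddress 1: 192.168.39.1 host.minikube.internal\n"
		fmt.Println(thirdFieldOfLine5(sample)) // 192.168.39.1
	}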

                                                
                                    
TestMultiNode/serial/AddNode (42.66s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-446803 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-446803 -v=5 --alsologtostderr: (42.201826568s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.66s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-446803 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.46s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

                                                
                                    
TestMultiNode/serial/CopyFile (6.04s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 cp testdata/cp-test.txt multinode-446803:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 ssh -n multinode-446803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 cp multinode-446803:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1541506731/001/cp-test_multinode-446803.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 ssh -n multinode-446803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 cp multinode-446803:/home/docker/cp-test.txt multinode-446803-m02:/home/docker/cp-test_multinode-446803_multinode-446803-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 ssh -n multinode-446803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 ssh -n multinode-446803-m02 "sudo cat /home/docker/cp-test_multinode-446803_multinode-446803-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 cp multinode-446803:/home/docker/cp-test.txt multinode-446803-m03:/home/docker/cp-test_multinode-446803_multinode-446803-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 ssh -n multinode-446803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 ssh -n multinode-446803-m03 "sudo cat /home/docker/cp-test_multinode-446803_multinode-446803-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 cp testdata/cp-test.txt multinode-446803-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 ssh -n multinode-446803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 cp multinode-446803-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1541506731/001/cp-test_multinode-446803-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 ssh -n multinode-446803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 cp multinode-446803-m02:/home/docker/cp-test.txt multinode-446803:/home/docker/cp-test_multinode-446803-m02_multinode-446803.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 ssh -n multinode-446803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 ssh -n multinode-446803 "sudo cat /home/docker/cp-test_multinode-446803-m02_multinode-446803.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 cp multinode-446803-m02:/home/docker/cp-test.txt multinode-446803-m03:/home/docker/cp-test_multinode-446803-m02_multinode-446803-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 ssh -n multinode-446803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 ssh -n multinode-446803-m03 "sudo cat /home/docker/cp-test_multinode-446803-m02_multinode-446803-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 cp testdata/cp-test.txt multinode-446803-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 ssh -n multinode-446803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 cp multinode-446803-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1541506731/001/cp-test_multinode-446803-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 ssh -n multinode-446803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 cp multinode-446803-m03:/home/docker/cp-test.txt multinode-446803:/home/docker/cp-test_multinode-446803-m03_multinode-446803.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 ssh -n multinode-446803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 ssh -n multinode-446803 "sudo cat /home/docker/cp-test_multinode-446803-m03_multinode-446803.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 cp multinode-446803-m03:/home/docker/cp-test.txt multinode-446803-m02:/home/docker/cp-test_multinode-446803-m03_multinode-446803-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 ssh -n multinode-446803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 ssh -n multinode-446803-m02 "sudo cat /home/docker/cp-test_multinode-446803-m03_multinode-446803-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.04s)
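
Note: every CopyFile step above follows one pattern: `minikube cp` a file to a node, then `ssh ... sudo cat` it back and compare with the original. A compact sketch of that round trip follows, using the profile and paths from the log; it is illustrative, not the suite's helper code.

	// cp_roundtrip_sketch.go - copy a file into a node and read it back.
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			panic(err)
		}
		run := func(args ...string) []byte {
			out, err := exec.Command("out/minikube-linux-amd64", args...).Output()
			if err != nil {
				panic(err)
			}
			return out
		}
		// Push the file, then cat it back over ssh, as the test does.
		run("-p", "multinode-446803", "cp", "testdata/cp-test.txt",
			"multinode-446803:/home/docker/cp-test.txt")
		got := run("-p", "multinode-446803", "ssh", "-n", "multinode-446803",
			"sudo cat /home/docker/cp-test.txt")
		fmt.Println("round trip ok:", bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)))
	}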

                                                
                                    
TestMultiNode/serial/StopNode (2.22s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-446803 node stop m03: (1.566859251s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-446803 status: exit status 7 (327.636255ms)

                                                
                                                
-- stdout --
	multinode-446803
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-446803-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-446803-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-446803 status --alsologtostderr: exit status 7 (328.385347ms)

                                                
                                                
-- stdout --
	multinode-446803
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-446803-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-446803-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:09:05.803217   28445 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:09:05.803441   28445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:09:05.803449   28445 out.go:374] Setting ErrFile to fd 2...
	I1129 09:09:05.803453   28445 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:09:05.803651   28445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
	I1129 09:09:05.803800   28445 out.go:368] Setting JSON to false
	I1129 09:09:05.803822   28445 mustload.go:66] Loading cluster: multinode-446803
	I1129 09:09:05.803951   28445 notify.go:221] Checking for updates...
	I1129 09:09:05.804169   28445 config.go:182] Loaded profile config "multinode-446803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:09:05.804184   28445 status.go:174] checking status of multinode-446803 ...
	I1129 09:09:05.806355   28445 status.go:371] multinode-446803 host status = "Running" (err=<nil>)
	I1129 09:09:05.806370   28445 host.go:66] Checking if "multinode-446803" exists ...
	I1129 09:09:05.808738   28445 main.go:143] libmachine: domain multinode-446803 has defined MAC address 52:54:00:b7:aa:35 in network mk-multinode-446803
	I1129 09:09:05.809142   28445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b7:aa:35", ip: ""} in network mk-multinode-446803: {Iface:virbr1 ExpiryTime:2025-11-29 10:06:17 +0000 UTC Type:0 Mac:52:54:00:b7:aa:35 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:multinode-446803 Clientid:01:52:54:00:b7:aa:35}
	I1129 09:09:05.809177   28445 main.go:143] libmachine: domain multinode-446803 has defined IP address 192.168.39.74 and MAC address 52:54:00:b7:aa:35 in network mk-multinode-446803
	I1129 09:09:05.809305   28445 host.go:66] Checking if "multinode-446803" exists ...
	I1129 09:09:05.809490   28445 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:09:05.811919   28445 main.go:143] libmachine: domain multinode-446803 has defined MAC address 52:54:00:b7:aa:35 in network mk-multinode-446803
	I1129 09:09:05.812347   28445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b7:aa:35", ip: ""} in network mk-multinode-446803: {Iface:virbr1 ExpiryTime:2025-11-29 10:06:17 +0000 UTC Type:0 Mac:52:54:00:b7:aa:35 Iaid: IPaddr:192.168.39.74 Prefix:24 Hostname:multinode-446803 Clientid:01:52:54:00:b7:aa:35}
	I1129 09:09:05.812376   28445 main.go:143] libmachine: domain multinode-446803 has defined IP address 192.168.39.74 and MAC address 52:54:00:b7:aa:35 in network mk-multinode-446803
	I1129 09:09:05.812559   28445 sshutil.go:53] new ssh client: &{IP:192.168.39.74 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/multinode-446803/id_rsa Username:docker}
	I1129 09:09:05.897258   28445 ssh_runner.go:195] Run: systemctl --version
	I1129 09:09:05.903381   28445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:09:05.921008   28445 kubeconfig.go:125] found "multinode-446803" server: "https://192.168.39.74:8443"
	I1129 09:09:05.921042   28445 api_server.go:166] Checking apiserver status ...
	I1129 09:09:05.921076   28445 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1129 09:09:05.940557   28445 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1384/cgroup
	W1129 09:09:05.954635   28445 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1384/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1129 09:09:05.954716   28445 ssh_runner.go:195] Run: ls
	I1129 09:09:05.960439   28445 api_server.go:253] Checking apiserver healthz at https://192.168.39.74:8443/healthz ...
	I1129 09:09:05.966396   28445 api_server.go:279] https://192.168.39.74:8443/healthz returned 200:
	ok
	I1129 09:09:05.966422   28445 status.go:463] multinode-446803 apiserver status = Running (err=<nil>)
	I1129 09:09:05.966434   28445 status.go:176] multinode-446803 status: &{Name:multinode-446803 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 09:09:05.966469   28445 status.go:174] checking status of multinode-446803-m02 ...
	I1129 09:09:05.968015   28445 status.go:371] multinode-446803-m02 host status = "Running" (err=<nil>)
	I1129 09:09:05.968037   28445 host.go:66] Checking if "multinode-446803-m02" exists ...
	I1129 09:09:05.970940   28445 main.go:143] libmachine: domain multinode-446803-m02 has defined MAC address 52:54:00:a3:dc:5b in network mk-multinode-446803
	I1129 09:09:05.971333   28445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a3:dc:5b", ip: ""} in network mk-multinode-446803: {Iface:virbr1 ExpiryTime:2025-11-29 10:07:39 +0000 UTC Type:0 Mac:52:54:00:a3:dc:5b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-446803-m02 Clientid:01:52:54:00:a3:dc:5b}
	I1129 09:09:05.971366   28445 main.go:143] libmachine: domain multinode-446803-m02 has defined IP address 192.168.39.30 and MAC address 52:54:00:a3:dc:5b in network mk-multinode-446803
	I1129 09:09:05.971534   28445 host.go:66] Checking if "multinode-446803-m02" exists ...
	I1129 09:09:05.971851   28445 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1129 09:09:05.974581   28445 main.go:143] libmachine: domain multinode-446803-m02 has defined MAC address 52:54:00:a3:dc:5b in network mk-multinode-446803
	I1129 09:09:05.974960   28445 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a3:dc:5b", ip: ""} in network mk-multinode-446803: {Iface:virbr1 ExpiryTime:2025-11-29 10:07:39 +0000 UTC Type:0 Mac:52:54:00:a3:dc:5b Iaid: IPaddr:192.168.39.30 Prefix:24 Hostname:multinode-446803-m02 Clientid:01:52:54:00:a3:dc:5b}
	I1129 09:09:05.974987   28445 main.go:143] libmachine: domain multinode-446803-m02 has defined IP address 192.168.39.30 and MAC address 52:54:00:a3:dc:5b in network mk-multinode-446803
	I1129 09:09:05.975118   28445 sshutil.go:53] new ssh client: &{IP:192.168.39.30 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22000-5651/.minikube/machines/multinode-446803-m02/id_rsa Username:docker}
	I1129 09:09:06.054461   28445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1129 09:09:06.070818   28445 status.go:176] multinode-446803-m02 status: &{Name:multinode-446803-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1129 09:09:06.070889   28445 status.go:174] checking status of multinode-446803-m03 ...
	I1129 09:09:06.072537   28445 status.go:371] multinode-446803-m03 host status = "Stopped" (err=<nil>)
	I1129 09:09:06.072553   28445 status.go:384] host is not running, skipping remaining checks
	I1129 09:09:06.072557   28445 status.go:176] multinode-446803-m03 status: &{Name:multinode-446803-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.22s)
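
Note: with m03 stopped, `minikube status` above returns exit status 7 while still printing the per-node summary. The sketch below shows how a caller might surface that code in Go; it only reports what was observed in this run and does not assume a full exit-code table.

	// status_exitcode_sketch.go - run `minikube status` and report its exit code.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "-p", "multinode-446803", "status")
		out, err := cmd.Output() // stdout is still populated on non-zero exit
		fmt.Print(string(out))   // per-node host/kubelet/apiserver summary
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Non-zero exit (7 in the run above) signals at least one
			// component is not Running.
			fmt.Println("status exit code:", ee.ExitCode())
		}
	}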

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.44s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 node start m03 -v=5 --alsologtostderr
E1129 09:09:17.365496    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-446803 node start m03 -v=5 --alsologtostderr: (39.92828742s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.44s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (292.23s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-446803
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-446803
E1129 09:10:03.250641    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:11:14.290843    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-446803: (2m50.887126995s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-446803 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-446803 --wait=true -v=5 --alsologtostderr: (2m1.220241208s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-446803
--- PASS: TestMultiNode/serial/RestartKeepsNodes (292.23s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.66s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-446803 node delete m03: (2.182309998s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.66s)
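
Note: the go-template passed to kubectl above prints the Ready condition status for each node, one per line. The sketch below evaluates the same template with Go's text/template against a minimal, hand-written node list (hypothetical data, only the fields the template touches), to show what the test asserts on.

	// ready_template_sketch.go - evaluate the Ready-condition template locally.
	package main

	import (
		"encoding/json"
		"os"
		"text/template"
	)

	const tpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	func main() {
		// Two fake nodes; lowercase keys work because the data is decoded
		// into maps, just as kubectl applies templates to unstructured JSON.
		raw := `{"items":[
		  {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
		  {"status":{"conditions":[{"type":"MemoryPressure","status":"False"},{"type":"Ready","status":"True"}]}}]}`
		var nodes map[string]interface{}
		if err := json.Unmarshal([]byte(raw), &nodes); err != nil {
			panic(err)
		}
		t := template.Must(template.New("ready").Parse(tpl))
		_ = t.Execute(os.Stdout, nodes) // prints " True" once per node
	}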

                                                
                                    
TestMultiNode/serial/StopMultiNode (179.58s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 stop
E1129 09:15:03.241862    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:16:14.290374    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-446803 stop: (2m59.456760531s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-446803 status: exit status 7 (64.47954ms)

                                                
                                                
-- stdout --
	multinode-446803
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-446803-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-446803 status --alsologtostderr: exit status 7 (62.632371ms)

                                                
                                                
-- stdout --
	multinode-446803
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-446803-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1129 09:17:40.990283   30839 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:17:40.990515   30839 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:40.990523   30839 out.go:374] Setting ErrFile to fd 2...
	I1129 09:17:40.990528   30839 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:17:40.990699   30839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
	I1129 09:17:40.990899   30839 out.go:368] Setting JSON to false
	I1129 09:17:40.990926   30839 mustload.go:66] Loading cluster: multinode-446803
	I1129 09:17:40.991133   30839 notify.go:221] Checking for updates...
	I1129 09:17:40.991249   30839 config.go:182] Loaded profile config "multinode-446803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:17:40.991263   30839 status.go:174] checking status of multinode-446803 ...
	I1129 09:17:40.993489   30839 status.go:371] multinode-446803 host status = "Stopped" (err=<nil>)
	I1129 09:17:40.993505   30839 status.go:384] host is not running, skipping remaining checks
	I1129 09:17:40.993510   30839 status.go:176] multinode-446803 status: &{Name:multinode-446803 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1129 09:17:40.993551   30839 status.go:174] checking status of multinode-446803-m02 ...
	I1129 09:17:40.994815   30839 status.go:371] multinode-446803-m02 host status = "Stopped" (err=<nil>)
	I1129 09:17:40.994841   30839 status.go:384] host is not running, skipping remaining checks
	I1129 09:17:40.994848   30839 status.go:176] multinode-446803-m02 status: &{Name:multinode-446803-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (179.58s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (85.65s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-446803 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1129 09:18:06.315367    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-446803 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m25.178163464s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-446803 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (85.65s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (38.42s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-446803
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-446803-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-446803-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (83.687046ms)

                                                
                                                
-- stdout --
	* [multinode-446803-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-5651/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5651/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-446803-m02' is duplicated with machine name 'multinode-446803-m02' in profile 'multinode-446803'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-446803-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-446803-m03 --driver=kvm2  --container-runtime=crio: (37.220808875s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-446803
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-446803: exit status 80 (203.449479ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-446803 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-446803-m03 already exists in multinode-446803-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-446803-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.42s)
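
Note: the MK_USAGE failure above happens because "multinode-446803-m02" is already in use as a machine name inside the multinode-446803 profile, while "multinode-446803-m03" starts fine (node m03 was deleted earlier). A simplified, illustrative re-implementation of that uniqueness check follows; it is not minikube's actual code.

	// name_conflict_sketch.go - reject profile names that collide with
	// existing machine names.
	package main

	import "fmt"

	func conflicts(newProfile string, machineNames []string) bool {
		for _, m := range machineNames {
			if m == newProfile {
				return true
			}
		}
		return false
	}

	func main() {
		// Machine names of the existing profile after node m03 was deleted.
		machines := []string{"multinode-446803", "multinode-446803-m02"}
		fmt.Println(conflicts("multinode-446803-m02", machines)) // true  -> MK_USAGE
		fmt.Println(conflicts("multinode-446803-m03", machines)) // false -> start succeeds
	}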

                                                
                                    
TestScheduledStopUnix (107.84s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-324527 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-324527 --memory=3072 --driver=kvm2  --container-runtime=crio: (36.220407735s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-324527 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1129 09:22:56.577144   33543 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:22:56.577283   33543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:22:56.577295   33543 out.go:374] Setting ErrFile to fd 2...
	I1129 09:22:56.577301   33543 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:22:56.577490   33543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
	I1129 09:22:56.577782   33543 out.go:368] Setting JSON to false
	I1129 09:22:56.577924   33543 mustload.go:66] Loading cluster: scheduled-stop-324527
	I1129 09:22:56.578387   33543 config.go:182] Loaded profile config "scheduled-stop-324527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:22:56.578489   33543 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/scheduled-stop-324527/config.json ...
	I1129 09:22:56.578754   33543 mustload.go:66] Loading cluster: scheduled-stop-324527
	I1129 09:22:56.578916   33543 config.go:182] Loaded profile config "scheduled-stop-324527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-324527 -n scheduled-stop-324527
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-324527 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1129 09:22:56.866865   33588 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:22:56.867124   33588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:22:56.867133   33588 out.go:374] Setting ErrFile to fd 2...
	I1129 09:22:56.867137   33588 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:22:56.867391   33588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
	I1129 09:22:56.867686   33588 out.go:368] Setting JSON to false
	I1129 09:22:56.867959   33588 daemonize_unix.go:73] killing process 33578 as it is an old scheduled stop
	I1129 09:22:56.868061   33588 mustload.go:66] Loading cluster: scheduled-stop-324527
	I1129 09:22:56.868395   33588 config.go:182] Loaded profile config "scheduled-stop-324527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:22:56.868478   33588 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/scheduled-stop-324527/config.json ...
	I1129 09:22:56.868671   33588 mustload.go:66] Loading cluster: scheduled-stop-324527
	I1129 09:22:56.868796   33588 config.go:182] Loaded profile config "scheduled-stop-324527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1129 09:22:56.873650    9613 retry.go:31] will retry after 61.751µs: open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/scheduled-stop-324527/pid: no such file or directory
I1129 09:22:56.874787    9613 retry.go:31] will retry after 158.283µs: open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/scheduled-stop-324527/pid: no such file or directory
I1129 09:22:56.875886    9613 retry.go:31] will retry after 236.146µs: open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/scheduled-stop-324527/pid: no such file or directory
I1129 09:22:56.877043    9613 retry.go:31] will retry after 324.255µs: open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/scheduled-stop-324527/pid: no such file or directory
I1129 09:22:56.878174    9613 retry.go:31] will retry after 544.866µs: open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/scheduled-stop-324527/pid: no such file or directory
I1129 09:22:56.879298    9613 retry.go:31] will retry after 418.832µs: open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/scheduled-stop-324527/pid: no such file or directory
I1129 09:22:56.880435    9613 retry.go:31] will retry after 605.384µs: open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/scheduled-stop-324527/pid: no such file or directory
I1129 09:22:56.881585    9613 retry.go:31] will retry after 1.120195ms: open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/scheduled-stop-324527/pid: no such file or directory
I1129 09:22:56.883823    9613 retry.go:31] will retry after 3.481256ms: open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/scheduled-stop-324527/pid: no such file or directory
I1129 09:22:56.888094    9613 retry.go:31] will retry after 3.847094ms: open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/scheduled-stop-324527/pid: no such file or directory
I1129 09:22:56.892328    9613 retry.go:31] will retry after 3.830675ms: open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/scheduled-stop-324527/pid: no such file or directory
I1129 09:22:56.896571    9613 retry.go:31] will retry after 9.115834ms: open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/scheduled-stop-324527/pid: no such file or directory
I1129 09:22:56.906822    9613 retry.go:31] will retry after 12.863201ms: open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/scheduled-stop-324527/pid: no such file or directory
I1129 09:22:56.920166    9613 retry.go:31] will retry after 22.821401ms: open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/scheduled-stop-324527/pid: no such file or directory
I1129 09:22:56.943453    9613 retry.go:31] will retry after 34.942874ms: open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/scheduled-stop-324527/pid: no such file or directory
I1129 09:22:56.978747    9613 retry.go:31] will retry after 29.187318ms: open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/scheduled-stop-324527/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-324527 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-324527 -n scheduled-stop-324527
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-324527
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-324527 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1129 09:23:22.563586   33736 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:23:22.563819   33736 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:23:22.563839   33736 out.go:374] Setting ErrFile to fd 2...
	I1129 09:23:22.563843   33736 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:23:22.564030   33736 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
	I1129 09:23:22.564246   33736 out.go:368] Setting JSON to false
	I1129 09:23:22.564317   33736 mustload.go:66] Loading cluster: scheduled-stop-324527
	I1129 09:23:22.564623   33736 config.go:182] Loaded profile config "scheduled-stop-324527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:23:22.564685   33736 profile.go:143] Saving config to /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/scheduled-stop-324527/config.json ...
	I1129 09:23:22.564910   33736 mustload.go:66] Loading cluster: scheduled-stop-324527
	I1129 09:23:22.565018   33736 config.go:182] Loaded profile config "scheduled-stop-324527": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-324527
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-324527: exit status 7 (61.125542ms)

                                                
                                                
-- stdout --
	scheduled-stop-324527
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-324527 -n scheduled-stop-324527
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-324527 -n scheduled-stop-324527: exit status 7 (60.83854ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-324527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-324527
--- PASS: TestScheduledStopUnix (107.84s)

TestRunningBinaryUpgrade (392.88s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.4183464755 start -p running-upgrade-501515 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.4183464755 start -p running-upgrade-501515 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m30.568233184s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-501515 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-501515 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (4m58.148498435s)
helpers_test.go:175: Cleaning up "running-upgrade-501515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-501515
--- PASS: TestRunningBinaryUpgrade (392.88s)

TestKubernetesUpgrade (153.58s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-553896 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-553896 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.113074256s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-553896
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-553896: (1.998728478s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-553896 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-553896 status --format={{.Host}}: exit status 7 (83.588958ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-553896 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1129 09:25:03.242479    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-553896 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m7.340965791s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-553896 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-553896 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-553896 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (95.743522ms)
-- stdout --
	* [kubernetes-upgrade-553896] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-5651/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5651/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-553896
	    minikube start -p kubernetes-upgrade-553896 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5538962 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-553896 --kubernetes-version=v1.34.1
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-553896 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1129 09:25:57.367260    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-553896 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.889837211s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-553896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-553896
--- PASS: TestKubernetesUpgrade (153.58s)
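
For context on the downgrade step above: the test never parses the error text, it asserts only on the process exit code (106, the code minikube reports for K8S_DOWNGRADE_UNSUPPORTED). A minimal Go sketch of that style of check, reusing the binary path and profile name from this run; this is illustrative, not the actual version_upgrade_test.go code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the test issues for the rejected downgrade.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "kubernetes-upgrade-553896",
		"--kubernetes-version=v1.28.0", "--driver=kvm2", "--container-runtime=crio")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 106 {
		fmt.Println("downgrade rejected as expected (K8S_DOWNGRADE_UNSUPPORTED)")
		return
	}
	fmt.Printf("unexpected result: %v\n", err)
}
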
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-371904 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-371904 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (96.496999ms)
-- stdout --
	* [NoKubernetes-371904] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-5651/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5651/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (78.45s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-371904 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-371904 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m18.180181969s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-371904 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (78.45s)

TestNoKubernetes/serial/StartWithStopK8s (24.14s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-371904 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-371904 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (23.050877143s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-371904 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-371904 status -o json: exit status 2 (198.968531ms)
-- stdout --
	{"Name":"NoKubernetes-371904","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-371904
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (24.14s)
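
The JSON printed above is the contract this test relies on: the guest VM stays up while every Kubernetes component reports Stopped. A short sketch of decoding that shape in Go; the struct fields are inferred from the printed output, not copied from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
)

// Status mirrors the fields visible in the `status -o json` output above.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-371904","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st Status
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// The property the test checks: VM running, Kubernetes fully stopped.
	fmt.Println(st.Host == "Running" && st.Kubelet == "Stopped" && st.APIServer == "Stopped")
}
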
TestNoKubernetes/serial/Start (28.19s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-371904 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-371904 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (28.185270852s)
--- PASS: TestNoKubernetes/serial/Start (28.19s)

TestNetworkPlugins/group/false (4.27s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-473168 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-473168 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (142.859158ms)
-- stdout --
	* [false-473168] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22000
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22000-5651/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5651/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
-- /stdout --
** stderr ** 
	I1129 09:25:58.267413   36218 out.go:360] Setting OutFile to fd 1 ...
	I1129 09:25:58.267667   36218 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:25:58.267677   36218 out.go:374] Setting ErrFile to fd 2...
	I1129 09:25:58.267682   36218 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1129 09:25:58.267911   36218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22000-5651/.minikube/bin
	I1129 09:25:58.268376   36218 out.go:368] Setting JSON to false
	I1129 09:25:58.269283   36218 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4102,"bootTime":1764404256,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1129 09:25:58.269341   36218 start.go:143] virtualization: kvm guest
	I1129 09:25:58.272092   36218 out.go:179] * [false-473168] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1129 09:25:58.273436   36218 notify.go:221] Checking for updates...
	I1129 09:25:58.273461   36218 out.go:179]   - MINIKUBE_LOCATION=22000
	I1129 09:25:58.275736   36218 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1129 09:25:58.277146   36218 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22000-5651/kubeconfig
	I1129 09:25:58.278507   36218 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22000-5651/.minikube
	I1129 09:25:58.279969   36218 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1129 09:25:58.281430   36218 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1129 09:25:58.283445   36218 config.go:182] Loaded profile config "NoKubernetes-371904": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1129 09:25:58.283623   36218 config.go:182] Loaded profile config "kubernetes-upgrade-553896": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1129 09:25:58.283766   36218 config.go:182] Loaded profile config "running-upgrade-501515": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1129 09:25:58.283918   36218 driver.go:422] Setting default libvirt URI to qemu:///system
	I1129 09:25:58.326766   36218 out.go:179] * Using the kvm2 driver based on user configuration
	I1129 09:25:58.327917   36218 start.go:309] selected driver: kvm2
	I1129 09:25:58.327931   36218 start.go:927] validating driver "kvm2" against <nil>
	I1129 09:25:58.327942   36218 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1129 09:25:58.329758   36218 out.go:203] 
	W1129 09:25:58.330944   36218 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1129 09:25:58.332122   36218 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-473168 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-473168

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-473168

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-473168

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-473168

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-473168

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-473168

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-473168

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-473168

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-473168

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-473168

>>> host: /etc/nsswitch.conf:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: /etc/hosts:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: /etc/resolv.conf:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-473168

>>> host: crictl pods:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: crictl containers:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> k8s: describe netcat deployment:
error: context "false-473168" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-473168" does not exist

>>> k8s: netcat logs:
error: context "false-473168" does not exist

>>> k8s: describe coredns deployment:
error: context "false-473168" does not exist

>>> k8s: describe coredns pods:
error: context "false-473168" does not exist

>>> k8s: coredns logs:
error: context "false-473168" does not exist

>>> k8s: describe api server pod(s):
error: context "false-473168" does not exist

>>> k8s: api server logs:
error: context "false-473168" does not exist

>>> host: /etc/cni:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: ip a s:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: ip r s:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: iptables-save:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: iptables table nat:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> k8s: describe kube-proxy daemon set:
error: context "false-473168" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-473168" does not exist

>>> k8s: kube-proxy logs:
error: context "false-473168" does not exist

>>> host: kubelet daemon status:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: kubelet daemon config:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> k8s: kubelet logs:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:25:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.10:8443
  name: kubernetes-upgrade-553896
contexts:
- context:
    cluster: kubernetes-upgrade-553896
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:25:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-553896
  name: kubernetes-upgrade-553896
current-context: kubernetes-upgrade-553896
kind: Config
users:
- name: kubernetes-upgrade-553896
  user:
    client-certificate: /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/kubernetes-upgrade-553896/client.crt
    client-key: /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/kubernetes-upgrade-553896/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-473168

>>> host: docker daemon status:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: docker daemon config:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: /etc/docker/daemon.json:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: docker system info:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: cri-docker daemon status:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: cri-docker daemon config:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: cri-dockerd version:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: containerd daemon status:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: containerd daemon config:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: /etc/containerd/config.toml:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: containerd config dump:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: crio daemon status:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: crio daemon config:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: /etc/crio:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"

>>> host: crio config:
* Profile "false-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473168"
----------------------- debugLogs end: false-473168 [took: 3.9443388s] --------------------------------
helpers_test.go:175: Cleaning up "false-473168" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-473168
--- PASS: TestNetworkPlugins/group/false (4.27s)

TestISOImage/Setup (36.21s)

=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-872325 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1129 09:26:14.290611    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-872325 --no-kubernetes --driver=kvm2  --container-runtime=crio: (36.209937022s)
--- PASS: TestISOImage/Setup (36.21s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22000-5651/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
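
This subtest only inspects the cache directory named above: with --no-kubernetes the version is pinned to v0.0.0, so no Kubernetes binaries should ever be downloaded into it. A hedged sketch of such a check (the directory path comes from this run; the list of binaries to reject is an assumption, not minikube's exact assertion):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	cache := "/home/jenkins/minikube-integration/22000-5651/.minikube/cache/linux/amd64/v0.0.0"
	// Assumed list of artifacts that would indicate an unwanted download.
	for _, bin := range []string{"kubelet", "kubeadm", "kubectl"} {
		if _, err := os.Stat(filepath.Join(cache, bin)); err == nil {
			fmt.Printf("unexpected Kubernetes download: %s\n", bin)
			os.Exit(1)
		}
	}
	fmt.Println("cache contains no Kubernetes binaries, as expected for v0.0.0")
}
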
TestNoKubernetes/serial/VerifyK8sNotRunning (0.15s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-371904 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-371904 "sudo systemctl is-active --quiet service kubelet": exit status 1 (149.60607ms)
** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.15s)
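
The probe above works because `systemctl is-active --quiet` exits 0 only when the queried unit is active; the inner status 4 in stderr is systemd's status-unknown code, and ssh surfaces any non-zero remote status as its own exit 1. A sketch of the same assertion via os/exec, reusing the exact command from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Non-zero exit == kubelet not active inside the guest, which is what the test wants.
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-371904",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not running, as expected")
		return
	}
	fmt.Println("kubelet unexpectedly active")
}
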
TestNoKubernetes/serial/ProfileList (19.75s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (16.020122853s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.727273851s)
--- PASS: TestNoKubernetes/serial/ProfileList (19.75s)

TestNoKubernetes/serial/Stop (1.81s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-371904
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-371904: (1.812261626s)
--- PASS: TestNoKubernetes/serial/Stop (1.81s)

TestNoKubernetes/serial/StartNoArgs (18.17s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-371904 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-371904 --driver=kvm2  --container-runtime=crio: (18.170508403s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (18.17s)

TestISOImage/Binaries/crictl (0.21s)

=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-872325 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.21s)
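
Each of the Binaries subtests in this series runs the same one-line probe, `ssh "which <tool>"`, against the guest-872325 ISO VM. A table-driven sketch of that pattern (an illustrative shape, not the actual iso_test.go), covering the crictl check above and the subtests that follow:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// One `which` probe per binary the ISO is expected to ship.
	tools := []string{"crictl", "curl", "docker", "git", "iptables",
		"podman", "rsync", "socat", "wget", "VBoxControl", "VBoxService"}
	for _, tool := range tools {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "guest-872325",
			"ssh", "which "+tool)
		if err := cmd.Run(); err != nil {
			fmt.Printf("missing from ISO: %s\n", tool)
		}
	}
}
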
TestISOImage/Binaries/curl (0.17s)

=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-872325 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.17s)

TestISOImage/Binaries/docker (0.27s)

=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-872325 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.27s)

TestISOImage/Binaries/git (0.2s)

=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-872325 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.20s)

TestISOImage/Binaries/iptables (0.18s)

=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-872325 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.18s)

TestISOImage/Binaries/podman (0.2s)

=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-872325 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.20s)

TestISOImage/Binaries/rsync (0.19s)

=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-872325 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.19s)

TestISOImage/Binaries/socat (0.18s)

=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-872325 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.18s)

TestISOImage/Binaries/wget (0.18s)

=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-872325 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.18s)

TestISOImage/Binaries/VBoxControl (0.2s)

=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-872325 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.20s)

TestISOImage/Binaries/VBoxService (0.18s)

=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-872325 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.18s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-371904 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-371904 "sudo systemctl is-active --quiet service kubelet": exit status 1 (173.180703ms)
** stderr ** 
	ssh: Process exited with status 4
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

TestPause/serial/Start (100.44s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-893760 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-893760 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m40.439995827s)
--- PASS: TestPause/serial/Start (100.44s)

TestStoppedBinaryUpgrade/Setup (3.19s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.19s)

TestStoppedBinaryUpgrade/Upgrade (73.92s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2171686991 start -p stopped-upgrade-044628 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2171686991 start -p stopped-upgrade-044628 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (41.110303105s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2171686991 -p stopped-upgrade-044628 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2171686991 -p stopped-upgrade-044628 stop: (1.778879047s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-044628 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-044628 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (31.029558383s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (73.92s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-044628
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-044628: (1.263869781s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

TestNetworkPlugins/group/auto/Start (79.69s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-473168 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E1129 09:30:03.242576    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-473168 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m19.694501417s)
--- PASS: TestNetworkPlugins/group/auto/Start (79.69s)

TestNetworkPlugins/group/enable-default-cni/Start (84.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-473168 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-473168 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m24.29171394s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.29s)

TestNetworkPlugins/group/flannel/Start (83.71s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-473168 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1129 09:31:14.291041    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-473168 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m23.706624343s)
--- PASS: TestNetworkPlugins/group/flannel/Start (83.71s)

TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-473168 "pgrep -a kubelet"
I1129 09:31:18.863191    9613 config.go:182] Loaded profile config "auto-473168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

TestNetworkPlugins/group/auto/NetCatPod (12.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-473168 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-h55fz" [e86b1c88-b959-4b24-9e3c-a7b480f2aea6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-h55fz" [e86b1c88-b959-4b24-9e3c-a7b480f2aea6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004670222s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.29s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-473168 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)
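
Each DNS subtest runs a single probe: resolving the kubernetes.default service name from inside the netcat pod, which exercises pod-to-CoreDNS traffic over the CNI under test. Verbatim from the log:

    $ kubectl --context auto-473168 exec deployment/netcat -- nslookup kubernetes.default
    # should return the ClusterIP of the kubernetes service; a timeout here usually
    # means the CNI is not carrying UDP/53 traffic to the cluster DNS service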

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-473168 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-473168 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
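
Localhost and HairPin run the same netcat zero-I/O scan against different targets: localhost:8080 checks the pod can reach its own listener directly, while the hairpin variant dials the pod's own Service name (netcat), verifying traffic can loop back through the Service VIP. The nc flags, for reference: -z scans without sending data, -w 5 caps the connect timeout at 5 seconds, -i 5 spaces out probes. Sketch of the hairpin check:

    $ kubectl --context auto-473168 exec deployment/netcat -- \
        /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"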

                                                
                                    
TestNetworkPlugins/group/bridge/Start (83.15s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-473168 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-473168 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m23.149702103s)
--- PASS: TestNetworkPlugins/group/bridge/Start (83.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (100.1s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-473168 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-473168 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m40.095138843s)
--- PASS: TestNetworkPlugins/group/calico/Start (100.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-473168 "pgrep -a kubelet"
I1129 09:31:56.433721    9613 config.go:182] Loaded profile config "enable-default-cni-473168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-473168 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-w2t4g" [b09d61d0-11a9-4ac8-9dc1-9edd1b70eea1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-w2t4g" [b09d61d0-11a9-4ac8-9dc1-9edd1b70eea1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.006010078s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-ljc8k" [c2d07015-ac56-42a2-9e4c-8bb1f6051c8f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0049706s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
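
The ControllerPod checks poll for the CNI's own daemon pod before any traffic is exercised. The harness uses its own poller (helpers_test.go), but a rough kubectl equivalent would be:

    $ kubectl --context flannel-473168 -n kube-flannel wait \
        --for=condition=Ready pod -l app=flannel --timeout=600s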

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-473168 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-473168 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-473168 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-473168 "pgrep -a kubelet"
I1129 09:32:11.199177    9613 config.go:182] Loaded profile config "flannel-473168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-473168 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qxpc8" [71881ec2-d7a6-4f32-b45e-22c4a665bc4a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qxpc8" [71881ec2-d7a6-4f32-b45e-22c4a665bc4a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004715007s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (77.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-473168 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-473168 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m17.290135352s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (77.29s)
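
Note the --cni value here: unlike the other Start tests, which pass a built-in name (bridge, flannel, kindnet, calico), this one points at a manifest file, which minikube applies as-is. Sketch with a hypothetical local manifest:

    $ minikube start -p custom-cni-demo --cni=./my-cni.yaml \
        --driver=kvm2 --container-runtime=crio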

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-473168 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-473168 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-473168 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (70.01s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-473168 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-473168 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m10.007686147s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.01s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-473168 "pgrep -a kubelet"
I1129 09:33:08.258760    9613 config.go:182] Loaded profile config "bridge-473168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.49s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-473168 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4czqs" [5f52d144-2f74-4a8f-9222-f82a9e4d47b4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4czqs" [5f52d144-2f74-4a8f-9222-f82a9e4d47b4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004672236s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-473168 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-473168 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-473168 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-7xz4g" [ea1cb89b-e229-4275-ba09-84aca4e9c8e7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004711206s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-473168 "pgrep -a kubelet"
I1129 09:33:33.669792    9613 config.go:182] Loaded profile config "calico-473168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-473168 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bkpvk" [a13aa3fa-32d4-4017-98df-3442f8c8b50d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bkpvk" [a13aa3fa-32d4-4017-98df-3442f8c8b50d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005071958s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (98.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-928169 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-928169 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m38.519025541s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (98.52s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-473168 "pgrep -a kubelet"
I1129 09:33:39.935081    9613 config.go:182] Loaded profile config "custom-flannel-473168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (17.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-473168 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ltv47" [ff5d268e-2cf5-4fd5-b0b9-accb9f8fd45f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ltv47" [ff5d268e-2cf5-4fd5-b0b9-accb9f8fd45f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 17.004407262s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (17.31s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-473168 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-473168 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-473168 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-pntph" [8bb61b4e-6e49-4638-8e56-654608dffaf5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005304764s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-473168 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-473168 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-473168 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-473168 "pgrep -a kubelet"
I1129 09:33:57.892941    9613 config.go:182] Loaded profile config "kindnet-473168": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-473168 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6dxw8" [d500e493-7000-4cdf-b787-9a3f65f3ea4f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6dxw8" [d500e493-7000-4cdf-b787-9a3f65f3ea4f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.007270895s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (110.29s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-048081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-048081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m50.294386263s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (110.29s)
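
--preload=false disables the preloaded images/binaries tarball that minikube normally downloads, so every image is pulled from scratch inside the VM; that is consistent with this FirstStart (1m50s) running noticeably longer than the other FirstStart runs in this log. Sketch:

    $ minikube start -p no-preload-demo --preload=false \
        --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.34.1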

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.51s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-473168 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.51s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-473168 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-473168 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (99.03s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-199935 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-199935 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m39.028976898s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (99.03s)
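
--embed-certs inlines the client certificate and key into kubeconfig (base64-encoded) instead of referencing the files under ~/.minikube by path, which makes the resulting kubeconfig portable. Sketch:

    $ minikube start -p embed-certs-demo --embed-certs \
        --driver=kvm2 --container-runtime=crio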

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (102.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-773049 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1129 09:34:46.316796    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:35:03.242707    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/functional-180687/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-773049 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m42.613999392s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (102.61s)
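
--apiserver-port=8444 moves the API server off minikube's default of 8443, which is what the "diff-port" group exercises. Sketch:

    $ minikube start -p diff-port-demo --apiserver-port=8444 \
        --driver=kvm2 --container-runtime=crio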

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-928169 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [84be253f-7706-4ef9-9993-b4cd278045df] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [84be253f-7706-4ef9-9993-b4cd278045df] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.00421268s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-928169 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.34s)
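
The DeployApp tests wait for a busybox pod to become Ready and then read its open-file-descriptor soft limit, a cheap smoke test that exec works and the container inherited sane ulimits. Verbatim from the log:

    $ kubectl --context old-k8s-version-928169 exec busybox -- /bin/sh -c "ulimit -n"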

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-928169 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-928169 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.108305501s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-928169 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.20s)
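
--images and --registries override an addon's default image name and registry per component; here the test deliberately points MetricsServer at the placeholder registry fake.domain with an echoserver image, so it is effectively validating the override plumbing rather than a working metrics-server. Sketch:

    $ minikube addons enable metrics-server -p old-k8s-version-928169 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain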

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (89.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-928169 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-928169 --alsologtostderr -v=3: (1m29.223583704s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (89.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.28s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-048081 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3545cafe-9c82-4f12-b528-fa44733992e2] Pending
helpers_test.go:352: "busybox" [3545cafe-9c82-4f12-b528-fa44733992e2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3545cafe-9c82-4f12-b528-fa44733992e2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.005191096s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-048081 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-199935 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [693e7ed4-c021-4965-975c-6a576e720efe] Pending
helpers_test.go:352: "busybox" [693e7ed4-c021-4965-975c-6a576e720efe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [693e7ed4-c021-4965-975c-6a576e720efe] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003543093s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-199935 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-048081 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-048081 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (72.5s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-048081 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-048081 --alsologtostderr -v=3: (1m12.502374558s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (72.50s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-199935 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-199935 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.91s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (88.19s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-199935 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-199935 --alsologtostderr -v=3: (1m28.190134198s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (88.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-773049 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f31007bc-484d-4d40-a888-a23e5cb1fb41] Pending
helpers_test.go:352: "busybox" [f31007bc-484d-4d40-a888-a23e5cb1fb41] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f31007bc-484d-4d40-a888-a23e5cb1fb41] Running
E1129 09:36:14.290497    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/addons-213983/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003364639s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-773049 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-773049 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-773049 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (84.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-773049 --alsologtostderr -v=3
E1129 09:36:19.131360    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:19.137767    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:19.149163    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:19.170609    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:19.212064    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:19.293571    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:19.455207    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:19.776580    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:20.418436    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:21.700554    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:24.262868    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:29.384238    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:39.626543    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:56.670262    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/enable-default-cni-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:56.676756    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/enable-default-cni-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:56.688146    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/enable-default-cni-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:56.709554    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/enable-default-cni-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:56.751024    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/enable-default-cni-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:56.832521    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/enable-default-cni-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:56.994060    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/enable-default-cni-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:57.315822    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/enable-default-cni-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:36:57.958088    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/enable-default-cni-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-773049 --alsologtostderr -v=3: (1m24.512669869s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (84.51s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-928169 -n old-k8s-version-928169
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-928169 -n old-k8s-version-928169: exit status 7 (61.253628ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-928169 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)
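
minikube status reports machine state through its exit code as well as stdout, so on a stopped profile the command prints Stopped and exits non-zero; the test expects that (hence "may be ok") before enabling the dashboard addon offline. Sketch:

    $ minikube status --format={{.Host}} -p old-k8s-version-928169; echo "exit=$?"
    # on this run: prints Stopped, exits 7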

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (44.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-928169 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1129 09:36:59.239563    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/enable-default-cni-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:37:00.108483    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:37:01.800931    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/enable-default-cni-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:37:04.733876    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:37:04.740264    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:37:04.751654    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:37:04.773164    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:37:04.814585    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:37:04.896129    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:37:05.057694    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:37:05.379463    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:37:06.021613    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:37:06.922435    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/enable-default-cni-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:37:07.303249    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:37:09.865514    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:37:14.986905    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-928169 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (44.455686478s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-928169 -n old-k8s-version-928169
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-048081 -n no-preload-048081
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-048081 -n no-preload-048081: exit status 7 (81.795436ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-048081 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (60.35s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-048081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1129 09:37:17.164808    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/enable-default-cni-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:37:25.228945    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-048081 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (59.98675137s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-048081 -n no-preload-048081
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (60.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-199935 -n embed-certs-199935
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-199935 -n embed-certs-199935: exit status 7 (68.709224ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-199935 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (45.48s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-199935 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1129 09:37:37.646231    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/enable-default-cni-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:37:41.070330    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-199935 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (45.207530682s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-199935 -n embed-certs-199935
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (45.48s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (11.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-hgvk9" [b1338797-e2ad-486d-a03d-12012919262c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-hgvk9" [b1338797-e2ad-486d-a03d-12012919262c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.005362629s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (11.01s)
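
UserAppExistsAfterStop only watches for the dashboard pods to become Ready again after the restart. An equivalent hand check (a sketch, not what the harness literally runs; selector, namespace, and context name are taken from the log):

    kubectl --context old-k8s-version-928169 -n kubernetes-dashboard \
      wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m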

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-773049 -n default-k8s-diff-port-773049
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-773049 -n default-k8s-diff-port-773049: exit status 7 (71.315667ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-773049 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.62s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-773049 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1129 09:37:45.711170    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-773049 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (56.329511937s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-773049 -n default-k8s-diff-port-773049
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.62s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-hgvk9" [b1338797-e2ad-486d-a03d-12012919262c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00450853s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-928169 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-928169 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)
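
VerifyKubernetesImages dumps every image known to the container runtime and reports any that are not part of a stock minikube install; the busybox and kindnetd entries above are expected leftovers from earlier subtests. The underlying command, with jq added here purely as a host-side pretty-printer (an assumption that jq is installed on the host):

    out/minikube-linux-amd64 -p old-k8s-version-928169 image list --format=json | jq .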

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.97s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-928169 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-928169 --alsologtostderr -v=1: (1.629650454s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-928169 -n old-k8s-version-928169
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-928169 -n old-k8s-version-928169: exit status 2 (267.657314ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-928169 -n old-k8s-version-928169
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-928169 -n old-k8s-version-928169: exit status 2 (271.585723ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-928169 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-928169 -n old-k8s-version-928169
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-928169 -n old-k8s-version-928169
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.97s)
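
The Pause subtest leans on minikube's exit-code convention: while components are paused, status exits 2 and prints "Paused" for the API server and "Stopped" for the kubelet, which the harness flags as "may be ok". The same sequence by hand, using this run's profile:

    out/minikube-linux-amd64 pause -p old-k8s-version-928169 --alsologtostderr -v=1
    # Both of these exit 2 while paused; "|| true" keeps the script going.
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-928169 -n old-k8s-version-928169 || true
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-928169 -n old-k8s-version-928169 || true
    out/minikube-linux-amd64 unpause -p old-k8s-version-928169 --alsologtostderr -v=1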

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (56.96s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-671944 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1129 09:38:08.556089    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/bridge-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:08.562940    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/bridge-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:08.574397    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/bridge-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:08.595881    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/bridge-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:08.637389    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/bridge-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:08.718932    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/bridge-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:08.880534    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/bridge-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:09.201904    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/bridge-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:09.844248    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/bridge-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:11.126005    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/bridge-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:13.687603    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/bridge-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-671944 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (56.961752289s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (56.96s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (20.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l2mp7" [e9e017e6-c6c0-4d9d-92ad-1c8e57a80f1e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1129 09:38:18.608506    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/enable-default-cni-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:18.811004    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/bridge-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l2mp7" [e9e017e6-c6c0-4d9d-92ad-1c8e57a80f1e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 20.005769033s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (20.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (17.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xlhvd" [2608333f-bd78-43b8-98bf-4f1ac6709328] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1129 09:38:26.673540    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:27.471876    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/calico-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:27.479062    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/calico-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:27.490509    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/calico-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:27.512336    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/calico-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:27.553822    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/calico-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:27.635369    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/calico-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:27.797289    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/calico-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:28.118672    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/calico-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:28.760624    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/calico-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:29.052474    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/bridge-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:30.042291    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/calico-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xlhvd" [2608333f-bd78-43b8-98bf-4f1ac6709328] Running
E1129 09:38:32.604472    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/calico-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.005994365s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (17.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l2mp7" [e9e017e6-c6c0-4d9d-92ad-1c8e57a80f1e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009075679s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-048081 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E1129 09:38:41.513358    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/custom-flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xlhvd" [2608333f-bd78-43b8-98bf-4f1ac6709328] Running
E1129 09:38:37.726110    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/calico-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004227395s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-199935 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xnhwg" [239e6866-2d9b-4e58-b0a6-72fb31477648] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1129 09:38:40.224951    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/custom-flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:40.231422    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/custom-flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:40.242944    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/custom-flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:40.264418    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/custom-flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:40.305845    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/custom-flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:40.387427    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/custom-flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:40.549549    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/custom-flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:40.871216    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/custom-flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xnhwg" [239e6866-2d9b-4e58-b0a6-72fb31477648] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.00402641s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.45s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-048081 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.45s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.41s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-199935 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.41s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.48s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-048081 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-048081 --alsologtostderr -v=1: (1.170796481s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-048081 -n no-preload-048081
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-048081 -n no-preload-048081: exit status 2 (272.863108ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-048081 -n no-preload-048081
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-048081 -n no-preload-048081: exit status 2 (278.751303ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-048081 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-048081 -n no-preload-048081
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-048081 -n no-preload-048081
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.48s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.19s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-199935 --alsologtostderr -v=1
E1129 09:38:42.794964    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/custom-flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-199935 --alsologtostderr -v=1: (1.000210166s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-199935 -n embed-certs-199935
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-199935 -n embed-certs-199935: exit status 2 (274.079708ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-199935 -n embed-certs-199935
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-199935 -n embed-certs-199935: exit status 2 (268.413844ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-199935 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-199935 -n embed-certs-199935
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-199935 -n embed-certs-199935
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.19s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.31s)
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-872325 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.31s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.19s)
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-872325 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.19s)
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-872325 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-872325 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.18s)
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-872325 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-872325 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-872325 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
E1129 09:38:47.967679    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/calico-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)
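
Each PersistentMounts subtest asserts that the path lives on the ISO's persistent ext4 volume rather than on tmpfs, by running df inside the guest. The per-path checks collapse into one loop; a sketch against this run's guest profile (the grep fails, and the fallback echo fires, when the path is not listed on an ext4 filesystem):

    for d in /data /var/lib/docker /var/lib/cni /var/lib/kubelet \
             /var/lib/minikube /var/lib/toolbox /var/lib/boot2docker; do
      out/minikube-linux-amd64 -p guest-872325 ssh "df -t ext4 $d | grep $d" || echo "$d: not ext4-backed"
    done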

                                                
                                    
TestISOImage/VersionJSON (0.17s)
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-872325 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1763503576-21924
iso_test.go:118:   kicbase_version: v0.0.48-1761985721-21837
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: fae26615d717024600f131fc4fa68f9450a9ef29
--- PASS: TestISOImage/VersionJSON (0.17s)
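
/version.json is baked into the ISO and records which iso, kicbase, and minikube versions the image was built from; the subtest only checks that the file parses. Individual fields can be pulled out on the host, assuming jq is available there:

    out/minikube-linux-amd64 -p guest-872325 ssh "cat /version.json" | jq -r .iso_version
    # From this run: v1.37.0-1763503576-21924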

                                                
                                    
TestISOImage/eBPFSupport (0.19s)
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-872325 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.19s)
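
The eBPF probe is a plain file test: /sys/kernel/btf/vmlinux is exposed when the guest kernel was built with CONFIG_DEBUG_INFO_BTF=y, which CO-RE-style eBPF tooling needs at load time. The same check from a host shell:

    out/minikube-linux-amd64 -p guest-872325 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"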
E1129 09:38:49.534905    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/bridge-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:50.479101    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/custom-flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:51.688583    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/kindnet-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:51.695105    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/kindnet-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:51.706686    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/kindnet-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:51.728211    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/kindnet-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:51.769755    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/kindnet-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:51.851274    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/kindnet-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:52.013556    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/kindnet-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:52.335363    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/kindnet-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:52.977476    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/kindnet-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:38:54.259449    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/kindnet-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xnhwg" [239e6866-2d9b-4e58-b0a6-72fb31477648] Running
E1129 09:38:56.821176    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/kindnet-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00507679s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-773049 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-773049 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.8s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-773049 --alsologtostderr -v=1
E1129 09:39:00.721103    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/custom-flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-773049 -n default-k8s-diff-port-773049
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-773049 -n default-k8s-diff-port-773049: exit status 2 (241.283775ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-773049 -n default-k8s-diff-port-773049
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-773049 -n default-k8s-diff-port-773049: exit status 2 (245.202703ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-773049 --alsologtostderr -v=1
E1129 09:39:01.942992    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/kindnet-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-773049 -n default-k8s-diff-port-773049
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-773049 -n default-k8s-diff-port-773049
E1129 09:39:02.992306    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/auto-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.80s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-671944 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-671944 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.088263435s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)
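
EnableAddonWhileActive exercises the per-addon image and registry overrides rather than metrics-server itself; fake.domain is a deliberately unreachable placeholder registry. The override syntax is Component=image and Component=registry:

    out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-671944 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain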

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.56s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-671944 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-671944 --alsologtostderr -v=3: (10.556736244s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.56s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-671944 -n newest-cni-671944
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-671944 -n newest-cni-671944: exit status 7 (62.173407ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-671944 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)
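
EnableAddonAfterStop confirms that addon configuration can be changed while the VM is down: status exits 7 because the host is Stopped (flagged "may be ok" above), yet the addon flip still succeeds against the stored profile. By hand:

    # Exit status 7 here just means the host is stopped.
    out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-671944 -n newest-cni-671944 || echo "host stopped (exit $?)"
    out/minikube-linux-amd64 addons enable dashboard -p newest-cni-671944 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4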

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (31.96s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-671944 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1129 09:39:21.202824    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/custom-flannel-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:39:30.496692    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/bridge-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:39:32.667205    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/kindnet-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1129 09:39:40.530479    9613 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/enable-default-cni-473168/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-671944 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (31.727250036s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-671944 -n newest-cni-671944
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.96s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.19s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-671944 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.24s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-671944 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-671944 -n newest-cni-671944
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-671944 -n newest-cni-671944: exit status 2 (205.459758ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-671944 -n newest-cni-671944
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-671944 -n newest-cni-671944: exit status 2 (207.483245ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-671944 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-671944 -n newest-cni-671944
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-671944 -n newest-cni-671944
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.24s)

Test skip (40/345)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.29
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
138 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
139 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
140 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
143 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
144 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
261 TestNetworkPlugins/group/kubenet 3.81
269 TestNetworkPlugins/group/cilium 4.62
294 TestStartStop/group/disable-driver-mounts 0.2

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0.29s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-213983 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
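
Note: this skip, like several below, is driven by the container runtime the job was started with (cri-o), not by the code under test. A minimal sketch of starting profiles against different runtimes with minikube's documented --container-runtime flag (the profile names here are hypothetical):

	out/minikube-linux-amd64 start -p crio-demo --driver=kvm2 --container-runtime=cri-o
	out/minikube-linux-amd64 start -p docker-demo --driver=kvm2 --container-runtime=docker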

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
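
Note: all eight TunnelCmd skips above share one cause: `minikube tunnel` has to install host routes, and on this runner the `route` command would prompt for a password that CI cannot answer. A minimal sketch of running it interactively (the profile name is hypothetical; tunnel requests elevated privileges itself when it needs them):

	# keeps running in the foreground, routing traffic to the cluster's service network
	out/minikube-linux-amd64 tunnel -p functional-000000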

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.81s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-473168 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-473168

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-473168

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-473168

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-473168

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-473168

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-473168

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-473168

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-473168

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-473168

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-473168

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: /etc/hosts:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: /etc/resolv.conf:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-473168

>>> host: crictl pods:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: crictl containers:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> k8s: describe netcat deployment:
error: context "kubenet-473168" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-473168" does not exist

>>> k8s: netcat logs:
error: context "kubenet-473168" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-473168" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-473168" does not exist

>>> k8s: coredns logs:
error: context "kubenet-473168" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-473168" does not exist

>>> k8s: api server logs:
error: context "kubenet-473168" does not exist

>>> host: /etc/cni:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: ip a s:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: ip r s:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: iptables-save:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: iptables table nat:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-473168" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-473168" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-473168" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: kubelet daemon config:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> k8s: kubelet logs:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:25:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.10:8443
  name: kubernetes-upgrade-553896
contexts:
- context:
    cluster: kubernetes-upgrade-553896
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:25:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-553896
  name: kubernetes-upgrade-553896
current-context: kubernetes-upgrade-553896
kind: Config
users:
- name: kubernetes-upgrade-553896
  user:
    client-certificate: /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/kubernetes-upgrade-553896/client.crt
    client-key: /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/kubernetes-upgrade-553896/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-473168

>>> host: docker daemon status:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: docker daemon config:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: docker system info:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: cri-docker daemon status:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: cri-docker daemon config:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: cri-dockerd version:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: containerd daemon status:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: containerd daemon config:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: containerd config dump:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: crio daemon status:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: crio daemon config:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: /etc/crio:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

>>> host: crio config:
* Profile "kubenet-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473168"

----------------------- debugLogs end: kubenet-473168 [took: 3.633806239s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-473168" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-473168
--- SKIP: TestNetworkPlugins/group/kubenet (3.81s)
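
Note: the "context was not found" and "Profile ... not found" lines above are the expected shape of this skip: debugLogs probes a kubenet-473168 profile that was never started, and the kubeconfig dump shows only the kubernetes-upgrade-553896 context. The same failure mode can be reproduced against any kubeconfig that lacks the context:

	kubectl config get-contexts                    # kubenet-473168 is not listed
	kubectl --context kubenet-473168 get pods      # error: context was not found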

TestNetworkPlugins/group/cilium (4.62s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-473168 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-473168

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-473168

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-473168

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-473168

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-473168

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-473168

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-473168

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-473168

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-473168

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-473168

>>> host: /etc/nsswitch.conf:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

>>> host: /etc/hosts:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

>>> host: /etc/resolv.conf:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-473168

>>> host: crictl pods:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

>>> host: crictl containers:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

>>> k8s: describe netcat deployment:
error: context "cilium-473168" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-473168" does not exist

>>> k8s: netcat logs:
error: context "cilium-473168" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-473168" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-473168" does not exist

>>> k8s: coredns logs:
error: context "cilium-473168" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-473168" does not exist

>>> k8s: api server logs:
error: context "cilium-473168" does not exist

>>> host: /etc/cni:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

>>> host: ip a s:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

>>> host: ip r s:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

>>> host: iptables-save:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

>>> host: iptables table nat:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-473168

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-473168

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-473168" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-473168" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-473168

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-473168

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-473168" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-473168" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-473168" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-473168" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-473168" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

>>> host: kubelet daemon config:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

>>> k8s: kubelet logs:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:25:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.10:8443
  name: kubernetes-upgrade-553896
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22000-5651/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:26:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.99:8443
  name: running-upgrade-501515
contexts:
- context:
    cluster: kubernetes-upgrade-553896
    extensions:
    - extension:
        last-update: Sat, 29 Nov 2025 09:25:55 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-553896
  name: kubernetes-upgrade-553896
- context:
    cluster: running-upgrade-501515
    user: running-upgrade-501515
  name: running-upgrade-501515
current-context: running-upgrade-501515
kind: Config
users:
- name: kubernetes-upgrade-553896
  user:
    client-certificate: /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/kubernetes-upgrade-553896/client.crt
    client-key: /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/kubernetes-upgrade-553896/client.key
- name: running-upgrade-501515
  user:
    client-certificate: /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/running-upgrade-501515/client.crt
    client-key: /home/jenkins/minikube-integration/22000-5651/.minikube/profiles/running-upgrade-501515/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-473168

>>> host: docker daemon status:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

>>> host: docker daemon config:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

>>> host: docker system info:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-473168" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473168"

                                                
                                                
----------------------- debugLogs end: cilium-473168 [took: 4.43110321s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-473168" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-473168
--- SKIP: TestNetworkPlugins/group/cilium (4.62s)

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-909589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-909589
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)
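The guarded feature itself can still be exercised by hand on a virtualbox host; a hypothetical invocation mirroring the harness's profile naming (both flags are real minikube start options):

	out/minikube-linux-amd64 start -p disable-driver-mounts-909589 --driver=virtualbox --disable-driver-mounts
	out/minikube-linux-amd64 delete -p disable-driver-mounts-909589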
