Test Report: KVM_Linux_crio 22158

84cd1e71ac9e612e02e936645952571e7d114b51:2025-12-16:42799

Failed tests (3/431)

Order | Failed test                                    | Duration (s)
46    | TestAddons/parallel/Ingress                    | 158.06
345   | TestPreload                                    | 144.68
403   | TestPause/serial/SecondStartNoReconfiguration  | 59.23
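To reproduce one of these failures locally, the corresponding integration test can be re-run on its own. A minimal sketch, assuming a minikube source checkout with out/minikube-linux-amd64 already built and a working kvm2/libvirt setup; the -minikube-start-args value below mirrors this job's configuration and is an assumption, not taken from the report itself:

    # Re-run only the failing Ingress subtest against the kvm2 + crio configuration
    go test ./test/integration -run 'TestAddons/parallel/Ingress' -v -timeout 60m \
      -minikube-start-args='--driver=kvm2 --container-runtime=crio'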
TestAddons/parallel/Ingress (158.06s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-703051 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-703051 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-703051 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [6ecb2063-1677-48a3-8f27-ea2c7d5c93c6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [6ecb2063-1677-48a3-8f27-ea2c7d5c93c6] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.002592889s
I1216 02:28:58.021574    8974 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-703051 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-703051 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.950482167s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-703051 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-703051 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.237
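The "ssh: Process exited with status 28" in the stderr block above is almost certainly curl's exit code 28 (operation timed out) propagated through minikube ssh: the nginx pod went Ready and its Service was found, but the request to the ingress controller on 127.0.0.1 inside the VM never answered within the roughly 2m14s the test waited. A rough interactive triage, assuming the addons-703051 profile is still up — these commands are a sketch and not part of the test itself, and --max-time is added here only to fail fast:

    # Repeat the exact probe the test performs, but verbose and with a short timeout
    out/minikube-linux-amd64 -p addons-703051 ssh \
      "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"

    # Inspect the ingress controller using the same selector the test waits on
    kubectl --context addons-703051 -n ingress-nginx get pods -o wide
    kubectl --context addons-703051 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50
    kubectl --context addons-703051 get ingress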
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-703051 -n addons-703051
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-703051 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-703051 logs -n 25: (1.049334083s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-325050                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-325050 │ jenkins │ v1.37.0 │ 16 Dec 25 02:26 UTC │ 16 Dec 25 02:26 UTC │
	│ start   │ --download-only -p binary-mirror-911494 --alsologtostderr --binary-mirror http://127.0.0.1:37719 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-911494 │ jenkins │ v1.37.0 │ 16 Dec 25 02:26 UTC │                     │
	│ delete  │ -p binary-mirror-911494                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-911494 │ jenkins │ v1.37.0 │ 16 Dec 25 02:26 UTC │ 16 Dec 25 02:26 UTC │
	│ addons  │ disable dashboard -p addons-703051                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:26 UTC │                     │
	│ addons  │ enable dashboard -p addons-703051                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:26 UTC │                     │
	│ start   │ -p addons-703051 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:26 UTC │ 16 Dec 25 02:28 UTC │
	│ addons  │ addons-703051 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
	│ addons  │ addons-703051 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
	│ addons  │ enable headlamp -p addons-703051 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
	│ addons  │ addons-703051 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
	│ addons  │ addons-703051 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
	│ ssh     │ addons-703051 ssh cat /opt/local-path-provisioner/pvc-f9648a3b-9c51-449d-b8e4-4a857e52bcbe_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
	│ addons  │ addons-703051 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
	│ addons  │ addons-703051 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
	│ ip      │ addons-703051 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
	│ addons  │ addons-703051 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
	│ addons  │ addons-703051 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
	│ addons  │ addons-703051 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
	│ addons  │ addons-703051 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-703051                                                                                                                                                                                                                                                                                                                                                                                         │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
	│ addons  │ addons-703051 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │ 16 Dec 25 02:28 UTC │
	│ ssh     │ addons-703051 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:28 UTC │                     │
	│ addons  │ addons-703051 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:29 UTC │ 16 Dec 25 02:29 UTC │
	│ addons  │ addons-703051 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:29 UTC │ 16 Dec 25 02:29 UTC │
	│ ip      │ addons-703051 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-703051        │ jenkins │ v1.37.0 │ 16 Dec 25 02:31 UTC │ 16 Dec 25 02:31 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:26:03
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:26:03.746296    9897 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:26:03.746397    9897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:26:03.746405    9897 out.go:374] Setting ErrFile to fd 2...
	I1216 02:26:03.746409    9897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:26:03.746608    9897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
	I1216 02:26:03.747101    9897 out.go:368] Setting JSON to false
	I1216 02:26:03.747841    9897 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":509,"bootTime":1765851455,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 02:26:03.747893    9897 start.go:143] virtualization: kvm guest
	I1216 02:26:03.749692    9897 out.go:179] * [addons-703051] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 02:26:03.751621    9897 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:26:03.751399    9897 notify.go:221] Checking for updates...
	I1216 02:26:03.753838    9897 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:26:03.754983    9897 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	I1216 02:26:03.756001    9897 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	I1216 02:26:03.757055    9897 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 02:26:03.758092    9897 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:26:03.759341    9897 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:26:03.786847    9897 out.go:179] * Using the kvm2 driver based on user configuration
	I1216 02:26:03.787791    9897 start.go:309] selected driver: kvm2
	I1216 02:26:03.787801    9897 start.go:927] validating driver "kvm2" against <nil>
	I1216 02:26:03.787810    9897 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:26:03.788464    9897 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 02:26:03.788675    9897 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 02:26:03.788700    9897 cni.go:84] Creating CNI manager for ""
	I1216 02:26:03.788741    9897 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 02:26:03.788749    9897 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 02:26:03.788782    9897 start.go:353] cluster config:
	{Name:addons-703051 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-703051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1216 02:26:03.788876    9897 iso.go:125] acquiring lock: {Name:mk055aa36b1051bc664b283a8a6fb2af4db94c44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 02:26:03.790149    9897 out.go:179] * Starting "addons-703051" primary control-plane node in "addons-703051" cluster
	I1216 02:26:03.791150    9897 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 02:26:03.791178    9897 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 02:26:03.791183    9897 cache.go:65] Caching tarball of preloaded images
	I1216 02:26:03.791247    9897 preload.go:238] Found /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 02:26:03.791257    9897 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 02:26:03.791518    9897 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/config.json ...
	I1216 02:26:03.791537    9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/config.json: {Name:mkdc721774d5722ea61b35495cae8f72a0381294 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:26:03.791657    9897 start.go:360] acquireMachinesLock for addons-703051: {Name:mk6501572e7fc03699ef9d932e34f995d8ad6f98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 02:26:03.791710    9897 start.go:364] duration metric: took 41.49µs to acquireMachinesLock for "addons-703051"
	I1216 02:26:03.791727    9897 start.go:93] Provisioning new machine with config: &{Name:addons-703051 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:addons-703051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 02:26:03.791767    9897 start.go:125] createHost starting for "" (driver="kvm2")
	I1216 02:26:03.793122    9897 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1216 02:26:03.793287    9897 start.go:159] libmachine.API.Create for "addons-703051" (driver="kvm2")
	I1216 02:26:03.793315    9897 client.go:173] LocalClient.Create starting
	I1216 02:26:03.793394    9897 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem
	I1216 02:26:03.880330    9897 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem
	I1216 02:26:04.032060    9897 main.go:143] libmachine: creating domain...
	I1216 02:26:04.032081    9897 main.go:143] libmachine: creating network...
	I1216 02:26:04.033639    9897 main.go:143] libmachine: found existing default network
	I1216 02:26:04.033868    9897 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1216 02:26:04.034432    9897 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c90220}
	I1216 02:26:04.034521    9897 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-703051</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1216 02:26:04.040527    9897 main.go:143] libmachine: creating private network mk-addons-703051 192.168.39.0/24...
	I1216 02:26:04.102113    9897 main.go:143] libmachine: private network mk-addons-703051 192.168.39.0/24 created
	I1216 02:26:04.102404    9897 main.go:143] libmachine: <network>
	  <name>mk-addons-703051</name>
	  <uuid>96a0ff09-8e21-4333-9498-c46934f922dd</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:30:99:f8'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1216 02:26:04.102438    9897 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051 ...
	I1216 02:26:04.102461    9897 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22158-5036/.minikube/cache/iso/amd64/minikube-v1.37.0-1765836331-22158-amd64.iso
	I1216 02:26:04.102471    9897 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22158-5036/.minikube
	I1216 02:26:04.102532    9897 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22158-5036/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22158-5036/.minikube/cache/iso/amd64/minikube-v1.37.0-1765836331-22158-amd64.iso...
	I1216 02:26:04.357110    9897 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa...
	I1216 02:26:04.493537    9897 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/addons-703051.rawdisk...
	I1216 02:26:04.493573    9897 main.go:143] libmachine: Writing magic tar header
	I1216 02:26:04.493591    9897 main.go:143] libmachine: Writing SSH key tar header
	I1216 02:26:04.493659    9897 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051 ...
	I1216 02:26:04.493713    9897 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051
	I1216 02:26:04.493747    9897 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051 (perms=drwx------)
	I1216 02:26:04.493763    9897 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22158-5036/.minikube/machines
	I1216 02:26:04.493775    9897 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22158-5036/.minikube/machines (perms=drwxr-xr-x)
	I1216 02:26:04.493788    9897 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22158-5036/.minikube
	I1216 02:26:04.493798    9897 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22158-5036/.minikube (perms=drwxr-xr-x)
	I1216 02:26:04.493806    9897 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22158-5036
	I1216 02:26:04.493814    9897 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22158-5036 (perms=drwxrwxr-x)
	I1216 02:26:04.493825    9897 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1216 02:26:04.493834    9897 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1216 02:26:04.493851    9897 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1216 02:26:04.493861    9897 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1216 02:26:04.493870    9897 main.go:143] libmachine: checking permissions on dir: /home
	I1216 02:26:04.493879    9897 main.go:143] libmachine: skipping /home - not owner
	I1216 02:26:04.493883    9897 main.go:143] libmachine: defining domain...
	I1216 02:26:04.495130    9897 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-703051</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/addons-703051.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-703051'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1216 02:26:04.502470    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:d0:cf:e3 in network default
	I1216 02:26:04.503142    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:04.503160    9897 main.go:143] libmachine: starting domain...
	I1216 02:26:04.503164    9897 main.go:143] libmachine: ensuring networks are active...
	I1216 02:26:04.503855    9897 main.go:143] libmachine: Ensuring network default is active
	I1216 02:26:04.504280    9897 main.go:143] libmachine: Ensuring network mk-addons-703051 is active
	I1216 02:26:04.504973    9897 main.go:143] libmachine: getting domain XML...
	I1216 02:26:04.506231    9897 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-703051</name>
	  <uuid>c4ab45a7-215f-430e-bc6b-14f6c9c94339</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/addons-703051.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:7a:59:00'/>
	      <source network='mk-addons-703051'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:d0:cf:e3'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1216 02:26:05.754583    9897 main.go:143] libmachine: waiting for domain to start...
	I1216 02:26:05.755685    9897 main.go:143] libmachine: domain is now running
	I1216 02:26:05.755699    9897 main.go:143] libmachine: waiting for IP...
	I1216 02:26:05.756381    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:05.756847    9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
	I1216 02:26:05.756859    9897 main.go:143] libmachine: trying to list again with source=arp
	I1216 02:26:05.757132    9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
	I1216 02:26:05.757164    9897 retry.go:31] will retry after 194.356704ms: waiting for domain to come up
	I1216 02:26:05.953523    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:05.954069    9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
	I1216 02:26:05.954086    9897 main.go:143] libmachine: trying to list again with source=arp
	I1216 02:26:05.954380    9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
	I1216 02:26:05.954421    9897 retry.go:31] will retry after 363.516423ms: waiting for domain to come up
	I1216 02:26:06.319807    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:06.320279    9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
	I1216 02:26:06.320293    9897 main.go:143] libmachine: trying to list again with source=arp
	I1216 02:26:06.320536    9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
	I1216 02:26:06.320567    9897 retry.go:31] will retry after 436.798052ms: waiting for domain to come up
	I1216 02:26:06.759226    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:06.759840    9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
	I1216 02:26:06.759855    9897 main.go:143] libmachine: trying to list again with source=arp
	I1216 02:26:06.760212    9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
	I1216 02:26:06.760245    9897 retry.go:31] will retry after 403.662247ms: waiting for domain to come up
	I1216 02:26:07.165830    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:07.166400    9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
	I1216 02:26:07.166415    9897 main.go:143] libmachine: trying to list again with source=arp
	I1216 02:26:07.166676    9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
	I1216 02:26:07.166705    9897 retry.go:31] will retry after 481.547373ms: waiting for domain to come up
	I1216 02:26:07.649835    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:07.650570    9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
	I1216 02:26:07.650595    9897 main.go:143] libmachine: trying to list again with source=arp
	I1216 02:26:07.651002    9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
	I1216 02:26:07.651036    9897 retry.go:31] will retry after 630.696287ms: waiting for domain to come up
	I1216 02:26:08.282796    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:08.283364    9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
	I1216 02:26:08.283378    9897 main.go:143] libmachine: trying to list again with source=arp
	I1216 02:26:08.283654    9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
	I1216 02:26:08.283685    9897 retry.go:31] will retry after 823.417805ms: waiting for domain to come up
	I1216 02:26:09.109082    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:09.109664    9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
	I1216 02:26:09.109680    9897 main.go:143] libmachine: trying to list again with source=arp
	I1216 02:26:09.109955    9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
	I1216 02:26:09.109988    9897 retry.go:31] will retry after 1.344643175s: waiting for domain to come up
	I1216 02:26:10.456175    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:10.456703    9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
	I1216 02:26:10.456721    9897 main.go:143] libmachine: trying to list again with source=arp
	I1216 02:26:10.457007    9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
	I1216 02:26:10.457042    9897 retry.go:31] will retry after 1.518653081s: waiting for domain to come up
	I1216 02:26:11.976717    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:11.977252    9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
	I1216 02:26:11.977276    9897 main.go:143] libmachine: trying to list again with source=arp
	I1216 02:26:11.977562    9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
	I1216 02:26:11.977592    9897 retry.go:31] will retry after 1.82369489s: waiting for domain to come up
	I1216 02:26:13.803556    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:13.804131    9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
	I1216 02:26:13.804153    9897 main.go:143] libmachine: trying to list again with source=arp
	I1216 02:26:13.804484    9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
	I1216 02:26:13.804524    9897 retry.go:31] will retry after 2.904064752s: waiting for domain to come up
	I1216 02:26:16.712141    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:16.712715    9897 main.go:143] libmachine: no network interface addresses found for domain addons-703051 (source=lease)
	I1216 02:26:16.712735    9897 main.go:143] libmachine: trying to list again with source=arp
	I1216 02:26:16.713013    9897 main.go:143] libmachine: unable to find current IP address of domain addons-703051 in network mk-addons-703051 (interfaces detected: [])
	I1216 02:26:16.713051    9897 retry.go:31] will retry after 2.942381057s: waiting for domain to come up
	I1216 02:26:19.657109    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:19.657655    9897 main.go:143] libmachine: domain addons-703051 has current primary IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:19.657669    9897 main.go:143] libmachine: found domain IP: 192.168.39.237
	I1216 02:26:19.657676    9897 main.go:143] libmachine: reserving static IP address...
	I1216 02:26:19.658021    9897 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-703051", mac: "52:54:00:7a:59:00", ip: "192.168.39.237"} in network mk-addons-703051
	I1216 02:26:19.844126    9897 main.go:143] libmachine: reserved static IP address 192.168.39.237 for domain addons-703051
	I1216 02:26:19.844149    9897 main.go:143] libmachine: waiting for SSH...
	I1216 02:26:19.844158    9897 main.go:143] libmachine: Getting to WaitForSSH function...
	I1216 02:26:19.847118    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:19.847656    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:19.847688    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:19.847914    9897 main.go:143] libmachine: Using SSH client type: native
	I1216 02:26:19.848168    9897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I1216 02:26:19.848180    9897 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1216 02:26:19.961667    9897 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 02:26:19.962084    9897 main.go:143] libmachine: domain creation complete
	I1216 02:26:19.963552    9897 machine.go:94] provisionDockerMachine start ...
	I1216 02:26:19.965472    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:19.965813    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:19.965838    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:19.965998    9897 main.go:143] libmachine: Using SSH client type: native
	I1216 02:26:19.966189    9897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I1216 02:26:19.966199    9897 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 02:26:20.071281    9897 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 02:26:20.071327    9897 buildroot.go:166] provisioning hostname "addons-703051"
	I1216 02:26:20.074100    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:20.074466    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:20.074489    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:20.074640    9897 main.go:143] libmachine: Using SSH client type: native
	I1216 02:26:20.074826    9897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I1216 02:26:20.074837    9897 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-703051 && echo "addons-703051" | sudo tee /etc/hostname
	I1216 02:26:20.194474    9897 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-703051
	
	I1216 02:26:20.197735    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:20.198231    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:20.198261    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:20.198410    9897 main.go:143] libmachine: Using SSH client type: native
	I1216 02:26:20.198639    9897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I1216 02:26:20.198656    9897 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-703051' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-703051/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-703051' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 02:26:20.314865    9897 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 02:26:20.314895    9897 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5036/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5036/.minikube}
	I1216 02:26:20.314912    9897 buildroot.go:174] setting up certificates
	I1216 02:26:20.314948    9897 provision.go:84] configureAuth start
	I1216 02:26:20.317907    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:20.318277    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:20.318298    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:20.320859    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:20.321199    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:20.321223    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:20.321353    9897 provision.go:143] copyHostCerts
	I1216 02:26:20.321420    9897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5036/.minikube/ca.pem (1078 bytes)
	I1216 02:26:20.321534    9897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5036/.minikube/cert.pem (1123 bytes)
	I1216 02:26:20.321606    9897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5036/.minikube/key.pem (1679 bytes)
	I1216 02:26:20.321676    9897 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5036/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca-key.pem org=jenkins.addons-703051 san=[127.0.0.1 192.168.39.237 addons-703051 localhost minikube]
	I1216 02:26:20.531072    9897 provision.go:177] copyRemoteCerts
	I1216 02:26:20.531126    9897 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 02:26:20.533743    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:20.534076    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:20.534096    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:20.534216    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	I1216 02:26:20.618881    9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 02:26:20.647139    9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 02:26:20.676061    9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 02:26:20.702870    9897 provision.go:87] duration metric: took 387.893907ms to configureAuth
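The three scp lines above copy the generated server key, CA, and server cert to /etc/docker on the guest (the remote paths come from the auth options logged at 02:26:20.314895). A quick spot-check from the host, reusing this run's profile name, might look like the sketch below; it only inspects files the log says were copied.

    # Sketch: confirm the TLS material that configureAuth provisioned is in place.
    out/minikube-linux-amd64 -p addons-703051 ssh "sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem"
    # Subject and validity of the server cert generated for org=jenkins.addons-703051.
    out/minikube-linux-amd64 -p addons-703051 ssh "sudo openssl x509 -noout -subject -dates -in /etc/docker/server.pem"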
	I1216 02:26:20.702895    9897 buildroot.go:189] setting minikube options for container-runtime
	I1216 02:26:20.703106    9897 config.go:182] Loaded profile config "addons-703051": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:26:20.705799    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:20.706122    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:20.706144    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:20.706350    9897 main.go:143] libmachine: Using SSH client type: native
	I1216 02:26:20.706536    9897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I1216 02:26:20.706550    9897 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 02:26:20.957897    9897 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 02:26:20.957920    9897 machine.go:97] duration metric: took 994.353618ms to provisionDockerMachine
	I1216 02:26:20.957953    9897 client.go:176] duration metric: took 17.164630287s to LocalClient.Create
	I1216 02:26:20.957973    9897 start.go:167] duration metric: took 17.164685125s to libmachine.API.Create "addons-703051"
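Just above, provisionDockerMachine finishes by writing CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' to /etc/sysconfig/crio.minikube and restarting CRI-O; the echoed value in the SSH output confirms the write. Assuming the image's crio unit actually sources that sysconfig file (not shown in this log), a sketch to double-check:

    # Sketch: show the drop-in that was just written and confirm CRI-O came back up.
    out/minikube-linux-amd64 -p addons-703051 ssh "cat /etc/sysconfig/crio.minikube && systemctl is-active crio"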
	I1216 02:26:20.957981    9897 start.go:293] postStartSetup for "addons-703051" (driver="kvm2")
	I1216 02:26:20.957991    9897 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 02:26:20.958044    9897 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 02:26:20.960985    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:20.961434    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:20.961470    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:20.961633    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	I1216 02:26:21.045310    9897 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 02:26:21.050502    9897 info.go:137] Remote host: Buildroot 2025.02
	I1216 02:26:21.050534    9897 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5036/.minikube/addons for local assets ...
	I1216 02:26:21.050638    9897 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5036/.minikube/files for local assets ...
	I1216 02:26:21.050683    9897 start.go:296] duration metric: took 92.694156ms for postStartSetup
	I1216 02:26:21.053986    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:21.054412    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:21.054441    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:21.054674    9897 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/config.json ...
	I1216 02:26:21.054880    9897 start.go:128] duration metric: took 17.263103723s to createHost
	I1216 02:26:21.057022    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:21.057289    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:21.057306    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:21.057502    9897 main.go:143] libmachine: Using SSH client type: native
	I1216 02:26:21.057730    9897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I1216 02:26:21.057742    9897 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1216 02:26:21.165164    9897 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765851981.127109611
	
	I1216 02:26:21.165186    9897 fix.go:216] guest clock: 1765851981.127109611
	I1216 02:26:21.165194    9897 fix.go:229] Guest: 2025-12-16 02:26:21.127109611 +0000 UTC Remote: 2025-12-16 02:26:21.05489083 +0000 UTC m=+17.351991699 (delta=72.218781ms)
	I1216 02:26:21.165207    9897 fix.go:200] guest clock delta is within tolerance: 72.218781ms
	I1216 02:26:21.165211    9897 start.go:83] releasing machines lock for "addons-703051", held for 17.373492806s
	I1216 02:26:21.168286    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:21.168696    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:21.168715    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:21.169192    9897 ssh_runner.go:195] Run: cat /version.json
	I1216 02:26:21.169260    9897 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 02:26:21.172456    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:21.172533    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:21.172858    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:21.172893    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:21.172914    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:21.172957    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:21.173079    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	I1216 02:26:21.173250    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	I1216 02:26:21.250522    9897 ssh_runner.go:195] Run: systemctl --version
	I1216 02:26:21.287625    9897 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 02:26:21.448874    9897 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 02:26:21.455946    9897 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 02:26:21.456010    9897 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 02:26:21.475123    9897 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 02:26:21.475158    9897 start.go:496] detecting cgroup driver to use...
	I1216 02:26:21.475220    9897 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 02:26:21.494824    9897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 02:26:21.511073    9897 docker.go:218] disabling cri-docker service (if available) ...
	I1216 02:26:21.511148    9897 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 02:26:21.528083    9897 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 02:26:21.543455    9897 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 02:26:21.682422    9897 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 02:26:21.901202    9897 docker.go:234] disabling docker service ...
	I1216 02:26:21.901273    9897 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 02:26:21.917132    9897 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 02:26:21.931716    9897 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 02:26:22.084617    9897 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 02:26:22.223713    9897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 02:26:22.239287    9897 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 02:26:22.260906    9897 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 02:26:22.261002    9897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:26:22.272997    9897 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 02:26:22.273056    9897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:26:22.284720    9897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:26:22.296263    9897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:26:22.307583    9897 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 02:26:22.319960    9897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:26:22.331123    9897 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:26:22.350422    9897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 02:26:22.362126    9897 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 02:26:22.372344    9897 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 02:26:22.372409    9897 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 02:26:22.393248    9897 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 02:26:22.406245    9897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:26:22.539378    9897 ssh_runner.go:195] Run: sudo systemctl restart crio
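The block starting at 02:26:22.260906 rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image registry.k8s.io/pause:3.10.1, cgroup_manager "cgroupfs", conmon_cgroup "pod", net.ipv4.ip_unprivileged_port_start=0 under default_sysctls), loads br_netfilter, enables IPv4 forwarding, and restarts CRI-O. The same settings can be read back after the restart with a sketch like this, using only the paths shown above:

    # Sketch: confirm the sed edits and kernel prerequisites from the preceding steps.
    out/minikube-linux-amd64 -p addons-703051 ssh "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
    out/minikube-linux-amd64 -p addons-703051 ssh "lsmod | grep br_netfilter; cat /proc/sys/net/ipv4/ip_forward"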
	I1216 02:26:22.646401    9897 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 02:26:22.646481    9897 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 02:26:22.651542    9897 start.go:564] Will wait 60s for crictl version
	I1216 02:26:22.651614    9897 ssh_runner.go:195] Run: which crictl
	I1216 02:26:22.655344    9897 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 02:26:22.689871    9897 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 02:26:22.690025    9897 ssh_runner.go:195] Run: crio --version
	I1216 02:26:22.718442    9897 ssh_runner.go:195] Run: crio --version
	I1216 02:26:22.747121    9897 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1216 02:26:22.751219    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:22.751576    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:22.751605    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:22.751813    9897 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1216 02:26:22.756270    9897 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 02:26:22.771078    9897 kubeadm.go:884] updating cluster {Name:addons-703051 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-703051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 02:26:22.771201    9897 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 02:26:22.771255    9897 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:26:22.800114    9897 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1216 02:26:22.800188    9897 ssh_runner.go:195] Run: which lz4
	I1216 02:26:22.804337    9897 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 02:26:22.808796    9897 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 02:26:22.808834    9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1216 02:26:23.925489    9897 crio.go:462] duration metric: took 1.121189179s to copy over tarball
	I1216 02:26:23.925553    9897 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 02:26:25.260659    9897 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.335077671s)
	I1216 02:26:25.260684    9897 crio.go:469] duration metric: took 1.335169907s to extract the tarball
	I1216 02:26:25.260691    9897 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 02:26:25.297960    9897 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 02:26:25.335867    9897 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 02:26:25.335887    9897 cache_images.go:86] Images are preloaded, skipping loading
	I1216 02:26:25.335893    9897 kubeadm.go:935] updating node { 192.168.39.237 8443 v1.34.2 crio true true} ...
	I1216 02:26:25.335990    9897 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-703051 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-703051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 02:26:25.336052    9897 ssh_runner.go:195] Run: crio config
	I1216 02:26:25.378819    9897 cni.go:84] Creating CNI manager for ""
	I1216 02:26:25.378839    9897 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 02:26:25.378853    9897 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 02:26:25.378878    9897 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.237 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-703051 NodeName:addons-703051 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 02:26:25.379041    9897 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-703051"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.237"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 02:26:25.379103    9897 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 02:26:25.391406    9897 binaries.go:51] Found k8s binaries, skipping transfer
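The kubeadm configuration rendered above is about to be shipped to the node as kubeadm.yaml.new. One hedged way to sanity-check it offline is to compare it against upstream defaults for the same config kinds, assuming a kubeadm v1.34.2 binary is available locally (on the node itself the pinned binaries sit under /var/lib/minikube/binaries/v1.34.2, per the ls just above):

    # Sketch: print kubeadm's defaults for the kinds used above, for comparison with the generated file.
    kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration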
	I1216 02:26:25.391473    9897 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 02:26:25.403771    9897 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1216 02:26:25.424115    9897 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 02:26:25.444698    9897 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
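The three scp lines place the kubelet drop-in (313 bytes), the kubelet unit (352 bytes), and the kubeadm config (2216 bytes) at fixed paths on the node. Their sizes can be checked against the figures logged here:

    # Sketch: byte counts should match the sizes in the scp log lines above.
    out/minikube-linux-amd64 -p addons-703051 ssh "sudo wc -c /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /lib/systemd/system/kubelet.service /var/tmp/minikube/kubeadm.yaml.new"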
	I1216 02:26:25.464297    9897 ssh_runner.go:195] Run: grep 192.168.39.237	control-plane.minikube.internal$ /etc/hosts
	I1216 02:26:25.468231    9897 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.237	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 02:26:25.482512    9897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:26:25.622779    9897 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:26:25.653693    9897 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051 for IP: 192.168.39.237
	I1216 02:26:25.653718    9897 certs.go:195] generating shared ca certs ...
	I1216 02:26:25.653733    9897 certs.go:227] acquiring lock for ca certs: {Name:mk77e952ddad6d1f2b7d1d07b6d50cdef35b56ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:26:25.653873    9897 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5036/.minikube/ca.key
	I1216 02:26:25.699828    9897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt ...
	I1216 02:26:25.699859    9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt: {Name:mk96cbe67fb452e3df3335485db75f2b8d2e1ce5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:26:25.700033    9897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5036/.minikube/ca.key ...
	I1216 02:26:25.700045    9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/ca.key: {Name:mk50341eeb18c15b6a2b99322b38074283292ab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:26:25.700115    9897 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.key
	I1216 02:26:25.754939    9897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.crt ...
	I1216 02:26:25.754964    9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.crt: {Name:mk92d577e4f40a75e029f362bc1e4f62e633c62b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:26:25.755109    9897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.key ...
	I1216 02:26:25.755129    9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.key: {Name:mk3216f3bfbcf2ff0102997b68be97acb112f4c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:26:25.755216    9897 certs.go:257] generating profile certs ...
	I1216 02:26:25.755276    9897 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.key
	I1216 02:26:25.755295    9897 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt with IP's: []
	I1216 02:26:25.780115    9897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt ...
	I1216 02:26:25.780133    9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: {Name:mk73766b0106d430ea9ac5c15a4dda9ff5c3e32c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:26:25.780258    9897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.key ...
	I1216 02:26:25.780268    9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.key: {Name:mkcd34d963e04bce13bf159c3cf006123bf5dbe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:26:25.780330    9897 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.key.110bcffd
	I1216 02:26:25.780346    9897 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.crt.110bcffd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.237]
	I1216 02:26:25.882145    9897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.crt.110bcffd ...
	I1216 02:26:25.882173    9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.crt.110bcffd: {Name:mkd13ac21a13491c25f23352f1398d3ad162c18b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:26:25.882315    9897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.key.110bcffd ...
	I1216 02:26:25.882327    9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.key.110bcffd: {Name:mk4dfddb5bd59db476243c45983ffa412c6ec82d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:26:25.882397    9897 certs.go:382] copying /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.crt.110bcffd -> /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.crt
	I1216 02:26:25.882465    9897 certs.go:386] copying /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.key.110bcffd -> /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.key
	I1216 02:26:25.882506    9897 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/proxy-client.key
	I1216 02:26:25.882524    9897 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/proxy-client.crt with IP's: []
	I1216 02:26:25.932572    9897 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/proxy-client.crt ...
	I1216 02:26:25.932595    9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/proxy-client.crt: {Name:mk972001f2d21f7f6944ec53f3ff7c468aa275cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:26:25.932726    9897 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/proxy-client.key ...
	I1216 02:26:25.932737    9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/proxy-client.key: {Name:mkcf2a9bfe0760b09e02de6da2cbf550e9448e5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:26:25.932891    9897 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 02:26:25.932936    9897 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem (1078 bytes)
	I1216 02:26:25.932964    9897 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem (1123 bytes)
	I1216 02:26:25.932985    9897 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/key.pem (1679 bytes)
	I1216 02:26:25.933481    9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 02:26:25.962794    9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 02:26:25.991606    9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 02:26:26.019154    9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 02:26:26.045872    9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 02:26:26.073577    9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 02:26:26.100306    9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 02:26:26.128965    9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 02:26:26.156176    9897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 02:26:26.182603    9897 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
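At this point every profile cert has been copied under /var/lib/minikube/certs, and the apiserver cert was generated with SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.237] (crypto.go line at 02:26:25.780346). A sketch to read those SANs back off the node:

    # Sketch: the SAN list should match the IPs passed to the cert generator above.
    out/minikube-linux-amd64 -p addons-703051 ssh "sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'"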
	I1216 02:26:26.201153    9897 ssh_runner.go:195] Run: openssl version
	I1216 02:26:26.207055    9897 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:26:26.217778    9897 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 02:26:26.228473    9897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:26:26.233042    9897 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:26:26.233086    9897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 02:26:26.240675    9897 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 02:26:26.252375    9897 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
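The CA is installed as /usr/share/ca-certificates/minikubeCA.pem and then linked into /etc/ssl/certs under its OpenSSL subject hash, which is exactly what the `openssl x509 -hash` run above computed (b5213941). The same chain can be retraced by hand:

    # Sketch: the hash output names the symlink created on the preceding log line.
    out/minikube-linux-amd64 -p addons-703051 ssh "openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem && readlink /etc/ssl/certs/b5213941.0"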
	I1216 02:26:26.263538    9897 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 02:26:26.268010    9897 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 02:26:26.268070    9897 kubeadm.go:401] StartCluster: {Name:addons-703051 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-703051 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:26:26.268159    9897 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 02:26:26.268213    9897 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 02:26:26.300826    9897 cri.go:89] found id: ""
	I1216 02:26:26.300880    9897 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 02:26:26.314662    9897 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 02:26:26.326632    9897 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 02:26:26.341701    9897 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 02:26:26.341718    9897 kubeadm.go:158] found existing configuration files:
	
	I1216 02:26:26.341753    9897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 02:26:26.355181    9897 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 02:26:26.355230    9897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 02:26:26.369476    9897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 02:26:26.379830    9897 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 02:26:26.379883    9897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 02:26:26.390614    9897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 02:26:26.400363    9897 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 02:26:26.400416    9897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 02:26:26.411747    9897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 02:26:26.421789    9897 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 02:26:26.421842    9897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 02:26:26.432321    9897 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 02:26:26.576766    9897 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 02:26:39.439411    9897 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 02:26:39.439481    9897 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 02:26:39.439568    9897 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 02:26:39.439676    9897 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 02:26:39.439792    9897 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 02:26:39.439886    9897 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 02:26:39.441191    9897 out.go:252]   - Generating certificates and keys ...
	I1216 02:26:39.441285    9897 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 02:26:39.441364    9897 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 02:26:39.441452    9897 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 02:26:39.441543    9897 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 02:26:39.441612    9897 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 02:26:39.441693    9897 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 02:26:39.441783    9897 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 02:26:39.441920    9897 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-703051 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
	I1216 02:26:39.442007    9897 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 02:26:39.442176    9897 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-703051 localhost] and IPs [192.168.39.237 127.0.0.1 ::1]
	I1216 02:26:39.442287    9897 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 02:26:39.442375    9897 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 02:26:39.442412    9897 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 02:26:39.442457    9897 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 02:26:39.442497    9897 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 02:26:39.442557    9897 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 02:26:39.442618    9897 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 02:26:39.442684    9897 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 02:26:39.442752    9897 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 02:26:39.442888    9897 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 02:26:39.442978    9897 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 02:26:39.445052    9897 out.go:252]   - Booting up control plane ...
	I1216 02:26:39.445157    9897 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 02:26:39.445236    9897 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 02:26:39.445293    9897 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 02:26:39.445427    9897 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 02:26:39.445555    9897 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 02:26:39.445688    9897 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 02:26:39.445773    9897 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 02:26:39.445806    9897 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 02:26:39.445969    9897 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 02:26:39.446107    9897 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 02:26:39.446486    9897 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501620119s
	I1216 02:26:39.446599    9897 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 02:26:39.446705    9897 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.237:8443/livez
	I1216 02:26:39.446800    9897 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 02:26:39.446888    9897 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 02:26:39.447001    9897 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.119131436s
	I1216 02:26:39.447105    9897 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.570120988s
	I1216 02:26:39.447201    9897 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501413189s
	I1216 02:26:39.447380    9897 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 02:26:39.447548    9897 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 02:26:39.447649    9897 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 02:26:39.447836    9897 kubeadm.go:319] [mark-control-plane] Marking the node addons-703051 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 02:26:39.447914    9897 kubeadm.go:319] [bootstrap-token] Using token: uz28uy.dsgpl4o4zuxnmuzz
	I1216 02:26:39.449354    9897 out.go:252]   - Configuring RBAC rules ...
	I1216 02:26:39.449495    9897 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 02:26:39.449600    9897 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 02:26:39.449764    9897 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 02:26:39.449936    9897 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 02:26:39.450036    9897 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 02:26:39.450107    9897 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 02:26:39.450212    9897 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 02:26:39.450250    9897 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 02:26:39.450304    9897 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 02:26:39.450312    9897 kubeadm.go:319] 
	I1216 02:26:39.450366    9897 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 02:26:39.450372    9897 kubeadm.go:319] 
	I1216 02:26:39.450458    9897 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 02:26:39.450464    9897 kubeadm.go:319] 
	I1216 02:26:39.450484    9897 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 02:26:39.450580    9897 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 02:26:39.450666    9897 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 02:26:39.450675    9897 kubeadm.go:319] 
	I1216 02:26:39.450751    9897 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 02:26:39.450765    9897 kubeadm.go:319] 
	I1216 02:26:39.450841    9897 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 02:26:39.450854    9897 kubeadm.go:319] 
	I1216 02:26:39.450947    9897 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 02:26:39.451049    9897 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 02:26:39.451144    9897 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 02:26:39.451152    9897 kubeadm.go:319] 
	I1216 02:26:39.451251    9897 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 02:26:39.451350    9897 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 02:26:39.451359    9897 kubeadm.go:319] 
	I1216 02:26:39.451463    9897 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token uz28uy.dsgpl4o4zuxnmuzz \
	I1216 02:26:39.451593    9897 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6d3bac17af9f836812b78bb65fe3149db071d191150485ad31b907e98cbc14f1 \
	I1216 02:26:39.451623    9897 kubeadm.go:319] 	--control-plane 
	I1216 02:26:39.451632    9897 kubeadm.go:319] 
	I1216 02:26:39.451730    9897 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 02:26:39.451742    9897 kubeadm.go:319] 
	I1216 02:26:39.451852    9897 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token uz28uy.dsgpl4o4zuxnmuzz \
	I1216 02:26:39.452047    9897 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6d3bac17af9f836812b78bb65fe3149db071d191150485ad31b907e98cbc14f1 
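kubeadm init has completed: certs, kubeconfigs, static pod manifests, RBAC, CoreDNS, and kube-proxy are all in place, and the join command with its discovery hash is echoed above. A hedged way to confirm the control plane from inside the guest, using the pinned kubectl and the admin.conf path that kubeadm just wrote:

    # Sketch: the single control-plane node should be listed (Ready may lag until the CNI is configured below).
    out/minikube-linux-amd64 -p addons-703051 ssh "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes -o wide"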
	I1216 02:26:39.452060    9897 cni.go:84] Creating CNI manager for ""
	I1216 02:26:39.452066    9897 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 02:26:39.453397    9897 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 02:26:39.454508    9897 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 02:26:39.471915    9897 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
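minikube writes a 496-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist; its contents are not echoed in this log. Presumably it is a standard bridge plugin with host-local IPAM on the 10.244.0.0/16 pod CIDR chosen earlier plus a portmap plugin, but that is an assumption; the real file can simply be read back:

    # Sketch: inspect the bridge CNI config that was just installed.
    out/minikube-linux-amd64 -p addons-703051 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"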
	I1216 02:26:39.497529    9897 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 02:26:39.497668    9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:39.497670    9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-703051 minikube.k8s.io/updated_at=2025_12_16T02_26_39_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=addons-703051 minikube.k8s.io/primary=true
	I1216 02:26:39.543664    9897 ops.go:34] apiserver oom_adj: -16
	I1216 02:26:39.616196    9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:40.117125    9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:40.616670    9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:41.117092    9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:41.616319    9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:42.116330    9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:42.616795    9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:43.116301    9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:43.616287    9897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 02:26:43.690518    9897 kubeadm.go:1114] duration metric: took 4.192920237s to wait for elevateKubeSystemPrivileges
	I1216 02:26:43.690568    9897 kubeadm.go:403] duration metric: took 17.422501356s to StartCluster
	I1216 02:26:43.690591    9897 settings.go:142] acquiring lock: {Name:mk546ecdfe1860ae68a814905b53e6453298b4fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:26:43.690738    9897 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5036/kubeconfig
	I1216 02:26:43.691209    9897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/kubeconfig: {Name:mk6832d71ef0ad581fa898dceefc2fcc2fd665b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 02:26:43.691425    9897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 02:26:43.691456    9897 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 02:26:43.691630    9897 config.go:182] Loaded profile config "addons-703051": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:26:43.691578    9897 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1216 02:26:43.691751    9897 addons.go:70] Setting gcp-auth=true in profile "addons-703051"
	I1216 02:26:43.691768    9897 addons.go:70] Setting yakd=true in profile "addons-703051"
	I1216 02:26:43.691778    9897 addons.go:70] Setting ingress=true in profile "addons-703051"
	I1216 02:26:43.691789    9897 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-703051"
	I1216 02:26:43.691795    9897 addons.go:239] Setting addon ingress=true in "addons-703051"
	I1216 02:26:43.691800    9897 addons.go:239] Setting addon yakd=true in "addons-703051"
	I1216 02:26:43.691814    9897 addons.go:70] Setting cloud-spanner=true in profile "addons-703051"
	I1216 02:26:43.691831    9897 host.go:66] Checking if "addons-703051" exists ...
	I1216 02:26:43.691825    9897 addons.go:70] Setting storage-provisioner=true in profile "addons-703051"
	I1216 02:26:43.691840    9897 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-703051"
	I1216 02:26:43.691842    9897 addons.go:239] Setting addon cloud-spanner=true in "addons-703051"
	I1216 02:26:43.691848    9897 addons.go:239] Setting addon storage-provisioner=true in "addons-703051"
	I1216 02:26:43.691852    9897 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-703051"
	I1216 02:26:43.691856    9897 addons.go:70] Setting volumesnapshots=true in profile "addons-703051"
	I1216 02:26:43.691869    9897 host.go:66] Checking if "addons-703051" exists ...
	I1216 02:26:43.691874    9897 addons.go:239] Setting addon volumesnapshots=true in "addons-703051"
	I1216 02:26:43.691877    9897 host.go:66] Checking if "addons-703051" exists ...
	I1216 02:26:43.691880    9897 addons.go:70] Setting registry=true in profile "addons-703051"
	I1216 02:26:43.691882    9897 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-703051"
	I1216 02:26:43.691890    9897 addons.go:70] Setting ingress-dns=true in profile "addons-703051"
	I1216 02:26:43.691903    9897 addons.go:70] Setting default-storageclass=true in profile "addons-703051"
	I1216 02:26:43.691904    9897 addons.go:70] Setting metrics-server=true in profile "addons-703051"
	I1216 02:26:43.691909    9897 host.go:66] Checking if "addons-703051" exists ...
	I1216 02:26:43.691780    9897 mustload.go:66] Loading cluster: addons-703051
	I1216 02:26:43.691917    9897 addons.go:239] Setting addon metrics-server=true in "addons-703051"
	I1216 02:26:43.691918    9897 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-703051"
	I1216 02:26:43.691952    9897 host.go:66] Checking if "addons-703051" exists ...
	I1216 02:26:43.692091    9897 config.go:182] Loaded profile config "addons-703051": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:26:43.691840    9897 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-703051"
	I1216 02:26:43.692420    9897 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-703051"
	I1216 02:26:43.691865    9897 addons.go:70] Setting registry-creds=true in profile "addons-703051"
	I1216 02:26:43.692725    9897 addons.go:239] Setting addon registry-creds=true in "addons-703051"
	I1216 02:26:43.691832    9897 host.go:66] Checking if "addons-703051" exists ...
	I1216 02:26:43.691882    9897 addons.go:70] Setting inspektor-gadget=true in profile "addons-703051"
	I1216 02:26:43.692791    9897 addons.go:239] Setting addon inspektor-gadget=true in "addons-703051"
	I1216 02:26:43.692813    9897 host.go:66] Checking if "addons-703051" exists ...
	I1216 02:26:43.691893    9897 host.go:66] Checking if "addons-703051" exists ...
	I1216 02:26:43.691849    9897 addons.go:70] Setting volcano=true in profile "addons-703051"
	I1216 02:26:43.693074    9897 addons.go:239] Setting addon volcano=true in "addons-703051"
	I1216 02:26:43.693102    9897 host.go:66] Checking if "addons-703051" exists ...
	I1216 02:26:43.691871    9897 host.go:66] Checking if "addons-703051" exists ...
	I1216 02:26:43.693408    9897 out.go:179] * Verifying Kubernetes components...
	I1216 02:26:43.692750    9897 host.go:66] Checking if "addons-703051" exists ...
	I1216 02:26:43.691894    9897 addons.go:239] Setting addon registry=true in "addons-703051"
	I1216 02:26:43.693607    9897 host.go:66] Checking if "addons-703051" exists ...
	I1216 02:26:43.691907    9897 addons.go:239] Setting addon ingress-dns=true in "addons-703051"
	I1216 02:26:43.693680    9897 host.go:66] Checking if "addons-703051" exists ...
	I1216 02:26:43.691769    9897 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-703051"
	I1216 02:26:43.693711    9897 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-703051"
	I1216 02:26:43.693730    9897 host.go:66] Checking if "addons-703051" exists ...
	I1216 02:26:43.694968    9897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 02:26:43.697700    9897 host.go:66] Checking if "addons-703051" exists ...
	I1216 02:26:43.699054    9897 addons.go:239] Setting addon default-storageclass=true in "addons-703051"
	I1216 02:26:43.699084    9897 host.go:66] Checking if "addons-703051" exists ...
	I1216 02:26:43.700025    9897 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1216 02:26:43.700052    9897 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 02:26:43.700091    9897 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1216 02:26:43.700750    9897 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1216 02:26:43.701123    9897 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-703051"
	I1216 02:26:43.700762    9897 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1216 02:26:43.701152    9897 host.go:66] Checking if "addons-703051" exists ...
	W1216 02:26:43.701896    9897 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1216 02:26:43.702018    9897 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1216 02:26:43.702043    9897 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 02:26:43.702457    9897 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 02:26:43.702052    9897 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 02:26:43.702577    9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1216 02:26:43.702772    9897 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1216 02:26:43.702779    9897 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1216 02:26:43.702789    9897 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 02:26:43.702802    9897 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1216 02:26:43.702827    9897 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1216 02:26:43.702847    9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1216 02:26:43.703717    9897 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1216 02:26:43.703733    9897 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1216 02:26:43.703742    9897 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1216 02:26:43.704005    9897 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 02:26:43.704232    9897 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 02:26:43.707366    9897 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1216 02:26:43.707377    9897 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1216 02:26:43.707379    9897 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1216 02:26:43.707390    9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1216 02:26:43.707403    9897 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1216 02:26:43.707381    9897 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1216 02:26:43.707530    9897 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 02:26:43.707538    9897 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1216 02:26:43.708207    9897 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1216 02:26:43.708214    9897 out.go:179]   - Using image docker.io/busybox:stable
	I1216 02:26:43.708650    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.708793    9897 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 02:26:43.708807    9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1216 02:26:43.708821    9897 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 02:26:43.708831    9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1216 02:26:43.708793    9897 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:26:43.708872    9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 02:26:43.709053    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.709408    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.709569    9897 out.go:179]   - Using image docker.io/registry:3.0.0
	I1216 02:26:43.709576    9897 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1216 02:26:43.709648    9897 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1216 02:26:43.709662    9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1216 02:26:43.710221    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:43.710250    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.710341    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.710435    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:43.710470    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.710689    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:43.710718    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.710767    9897 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1216 02:26:43.710811    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	I1216 02:26:43.710866    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.711039    9897 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 02:26:43.711097    9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1216 02:26:43.711248    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	I1216 02:26:43.711331    9897 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1216 02:26:43.711344    9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1216 02:26:43.711619    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	I1216 02:26:43.711650    9897 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1216 02:26:43.712008    9897 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 02:26:43.712030    9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1216 02:26:43.712355    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:43.712388    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.712585    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:43.712614    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.713120    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	I1216 02:26:43.713193    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	I1216 02:26:43.713732    9897 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1216 02:26:43.714975    9897 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1216 02:26:43.716003    9897 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1216 02:26:43.717016    9897 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1216 02:26:43.717158    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.717899    9897 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1216 02:26:43.717953    9897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1216 02:26:43.718053    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.718182    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:43.718213    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.718446    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.718864    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	I1216 02:26:43.719070    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.719177    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.719652    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:43.719685    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.719905    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:43.719955    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.719982    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.720109    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	I1216 02:26:43.720373    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:43.720445    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.720401    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:43.720531    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.720573    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	I1216 02:26:43.720797    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	I1216 02:26:43.721150    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.721227    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	I1216 02:26:43.721296    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:43.721322    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.721329    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.721560    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	I1216 02:26:43.721852    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.721912    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:43.721956    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.722303    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	I1216 02:26:43.722509    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:43.722545    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.722595    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:43.722641    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.722705    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	I1216 02:26:43.722915    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	I1216 02:26:43.723949    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.724446    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:43.724478    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:43.724644    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	W1216 02:26:43.917697    9897 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54634->192.168.39.237:22: read: connection reset by peer
	I1216 02:26:43.917730    9897 retry.go:31] will retry after 260.18533ms: ssh: handshake failed: read tcp 192.168.39.1:54634->192.168.39.237:22: read: connection reset by peer
	W1216 02:26:43.946111    9897 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:54652->192.168.39.237:22: read: connection reset by peer
	I1216 02:26:43.946141    9897 retry.go:31] will retry after 290.504819ms: ssh: handshake failed: read tcp 192.168.39.1:54652->192.168.39.237:22: read: connection reset by peer
	I1216 02:26:44.022122    9897 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
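The sed pipeline above splices exactly two directives into the Corefile stored in the coredns ConfigMap: a hosts block (inserted ahead of the forward line) that resolves host.minikube.internal to 192.168.39.1, and a log directive ahead of errors. After the replace, the affected part of the Corefile reads roughly as follows (all other directives untouched and elided here):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}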
	I1216 02:26:44.054036    9897 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 02:26:44.299895    9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 02:26:44.300699    9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1216 02:26:44.334354    9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 02:26:44.380754    9897 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1216 02:26:44.380780    9897 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1216 02:26:44.392050    9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 02:26:44.431041    9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 02:26:44.484521    9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 02:26:44.523606    9897 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1216 02:26:44.523626    9897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1216 02:26:44.539740    9897 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 02:26:44.539759    9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1216 02:26:44.544336    9897 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1216 02:26:44.544359    9897 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1216 02:26:44.549857    9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 02:26:44.559534    9897 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1216 02:26:44.559552    9897 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1216 02:26:44.560772    9897 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1216 02:26:44.560792    9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1216 02:26:44.584689    9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1216 02:26:44.879456    9897 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1216 02:26:44.879486    9897 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1216 02:26:44.962072    9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 02:26:45.026319    9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1216 02:26:45.042389    9897 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 02:26:45.042417    9897 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 02:26:45.056839    9897 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1216 02:26:45.056873    9897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1216 02:26:45.130086    9897 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1216 02:26:45.130110    9897 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1216 02:26:45.260473    9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1216 02:26:45.385715    9897 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 02:26:45.385772    9897 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 02:26:45.420181    9897 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1216 02:26:45.420212    9897 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1216 02:26:45.574777    9897 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1216 02:26:45.574811    9897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1216 02:26:45.624452    9897 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1216 02:26:45.624503    9897 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1216 02:26:45.733011    9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 02:26:45.982219    9897 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1216 02:26:45.982247    9897 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1216 02:26:46.167269    9897 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1216 02:26:46.167308    9897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1216 02:26:46.196824    9897 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1216 02:26:46.196853    9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1216 02:26:46.308954    9897 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 02:26:46.308983    9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1216 02:26:46.455317    9897 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1216 02:26:46.455351    9897 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1216 02:26:46.535473    9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1216 02:26:46.663469    9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 02:26:46.804093    9897 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1216 02:26:46.804121    9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1216 02:26:47.014056    9897 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.991890726s)
	I1216 02:26:47.014098    9897 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1216 02:26:47.014127    9897 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.960060673s)
	I1216 02:26:47.014768    9897 node_ready.go:35] waiting up to 6m0s for node "addons-703051" to be "Ready" ...
	I1216 02:26:47.039570    9897 node_ready.go:49] node "addons-703051" is "Ready"
	I1216 02:26:47.039601    9897 node_ready.go:38] duration metric: took 24.815989ms for node "addons-703051" to be "Ready" ...
	I1216 02:26:47.039612    9897 api_server.go:52] waiting for apiserver process to appear ...
	I1216 02:26:47.039659    9897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:26:47.153509    9897 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1216 02:26:47.153537    9897 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1216 02:26:47.544722    9897 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-703051" context rescaled to 1 replicas
	I1216 02:26:47.685399    9897 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1216 02:26:47.685438    9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1216 02:26:48.041556    9897 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1216 02:26:48.041577    9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1216 02:26:48.291264    9897 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 02:26:48.291294    9897 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1216 02:26:48.443449    9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 02:26:50.566888    9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (6.266161165s)
	I1216 02:26:50.567006    9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.267075182s)
	I1216 02:26:50.567102    9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.2327195s)
	I1216 02:26:50.567152    9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.175073705s)
	I1216 02:26:50.567220    9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.136144795s)
	I1216 02:26:50.567251    9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.082702609s)
	I1216 02:26:50.567297    9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (6.017402841s)
	I1216 02:26:50.567337    9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.982620356s)
	W1216 02:26:50.693849    9897 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
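The warning above is an ordinary optimistic-concurrency conflict: two storage addons race to mark a default StorageClass, and the losing writer's resourceVersion is stale by the time it submits. As a hedged aside, the annotation the rancher provisioner was trying to set can be applied by hand once things settle, using the class name from the error (standard kubectl, not a command minikube runs here):

	kubectl patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'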
	I1216 02:26:51.190990    9897 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1216 02:26:51.193683    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:51.194106    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:51.194132    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:51.194319    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	I1216 02:26:51.378931    9897 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1216 02:26:51.456721    9897 addons.go:239] Setting addon gcp-auth=true in "addons-703051"
	I1216 02:26:51.456777    9897 host.go:66] Checking if "addons-703051" exists ...
	I1216 02:26:51.458873    9897 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1216 02:26:51.461743    9897 main.go:143] libmachine: domain addons-703051 has defined MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:51.462191    9897 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7a:59:00", ip: ""} in network mk-addons-703051: {Iface:virbr1 ExpiryTime:2025-12-16 03:26:18 +0000 UTC Type:0 Mac:52:54:00:7a:59:00 Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:addons-703051 Clientid:01:52:54:00:7a:59:00}
	I1216 02:26:51.462227    9897 main.go:143] libmachine: domain addons-703051 has defined IP address 192.168.39.237 and MAC address 52:54:00:7a:59:00 in network mk-addons-703051
	I1216 02:26:51.462403    9897 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/addons-703051/id_rsa Username:docker}
	I1216 02:26:52.676571    9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.714459473s)
	I1216 02:26:52.676613    9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.650263291s)
	I1216 02:26:52.676615    9897 addons.go:495] Verifying addon ingress=true in "addons-703051"
	I1216 02:26:52.676626    9897 addons.go:495] Verifying addon registry=true in "addons-703051"
	I1216 02:26:52.676775    9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.943733818s)
	I1216 02:26:52.676799    9897 addons.go:495] Verifying addon metrics-server=true in "addons-703051"
	I1216 02:26:52.676856    9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.141346096s)
	I1216 02:26:52.676689    9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (7.416182696s)
	I1216 02:26:52.676963    9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.013450778s)
	I1216 02:26:52.677001    9897 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.63732773s)
	I1216 02:26:52.677026    9897 api_server.go:72] duration metric: took 8.985541628s to wait for apiserver process to appear ...
	I1216 02:26:52.677036    9897 api_server.go:88] waiting for apiserver healthz status ...
	I1216 02:26:52.677053    9897 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	W1216 02:26:52.676999    9897 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 02:26:52.677161    9897 retry.go:31] will retry after 206.529654ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
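The failure and retry above stem from applying the VolumeSnapshotClass in the same kubectl apply as the CRDs that define it: the CRDs are created but not yet established when the class is mapped, hence "ensure CRDs are installed first". minikube's answer is simply to retry the identical apply after ~206ms. A sketch of making the ordering explicit instead, reusing the manifests already on the node (waiting on CRD establishment is standard kubectl):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml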
	I1216 02:26:52.678652    9897 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-703051 service yakd-dashboard -n yakd-dashboard
	
	I1216 02:26:52.678664    9897 out.go:179] * Verifying registry addon...
	I1216 02:26:52.678669    9897 out.go:179] * Verifying ingress addon...
	I1216 02:26:52.680761    9897 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1216 02:26:52.681031    9897 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1216 02:26:52.714909    9897 api_server.go:279] https://192.168.39.237:8443/healthz returned 200:
	ok
	I1216 02:26:52.730295    9897 api_server.go:141] control plane version: v1.34.2
	I1216 02:26:52.730330    9897 api_server.go:131] duration metric: took 53.286434ms to wait for apiserver health ...
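The healthz probe above talks to the apiserver endpoint directly; /healthz (like /livez and /readyz) is typically readable without credentials thanks to the default system:public-info-viewer ClusterRole, so the equivalent manual check from the host would be roughly (-k skips TLS verification; the URL is the one logged):

	curl -sk https://192.168.39.237:8443/healthz
	# a healthy control plane answers with: ok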
	I1216 02:26:52.730342    9897 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 02:26:52.730342    9897 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1216 02:26:52.730360    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:52.730640    9897 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 02:26:52.730651    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
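Both kapi waiters above poll pods by label selector until they turn Ready. The equivalent manual checks, using the same selectors and namespaces from the log (standard kubectl, not minikube's internal poller):

	kubectl -n ingress-nginx wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=5m
	kubectl -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry --timeout=5m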
	I1216 02:26:52.757680    9897 system_pods.go:59] 17 kube-system pods found
	I1216 02:26:52.757717    9897 system_pods.go:61] "amd-gpu-device-plugin-4fpsx" [03ef77d5-d326-4953-8e23-ca6c08e8e512] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 02:26:52.757728    9897 system_pods.go:61] "coredns-66bc5c9577-4tgqh" [4edb0229-7f11-4e58-90a8-01dc7c8fe069] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 02:26:52.757739    9897 system_pods.go:61] "coredns-66bc5c9577-njd54" [fcee9a3a-3aad-44ae-b91c-62813e31b787] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 02:26:52.757746    9897 system_pods.go:61] "etcd-addons-703051" [6e78bbfd-5089-4514-8052-9e857a63cf57] Running
	I1216 02:26:52.757752    9897 system_pods.go:61] "kube-apiserver-addons-703051" [68ed3f4f-ce1c-4adb-a6e8-86a57e309fb6] Running
	I1216 02:26:52.757757    9897 system_pods.go:61] "kube-controller-manager-addons-703051" [83da4a9d-056e-4e75-b40b-030d0a61647f] Running
	I1216 02:26:52.757762    9897 system_pods.go:61] "kube-ingress-dns-minikube" [c3e6df2b-852c-4522-9103-54ca55c8c849] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 02:26:52.757766    9897 system_pods.go:61] "kube-proxy-mwxm8" [064f9463-1ca7-46b8-8428-a3450e6a50a7] Running
	I1216 02:26:52.757802    9897 system_pods.go:61] "kube-scheduler-addons-703051" [0b5f5d06-4a4f-4a55-826b-917b981d723a] Running
	I1216 02:26:52.757813    9897 system_pods.go:61] "metrics-server-85b7d694d7-f4xbr" [972a1533-af9a-480f-a4fb-80c6f4653290] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 02:26:52.757819    9897 system_pods.go:61] "nvidia-device-plugin-daemonset-dj88n" [aba0db89-f004-4cbb-880e-fda531ad78c4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 02:26:52.757826    9897 system_pods.go:61] "registry-6b586f9694-l9ptj" [96cdab4e-1722-4bce-87dc-d0c270e803a6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 02:26:52.757832    9897 system_pods.go:61] "registry-creds-764b6fb674-cx22t" [f12a412c-c2cf-4510-8362-2985e7f119b5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 02:26:52.757838    9897 system_pods.go:61] "registry-proxy-qx2bk" [ceaffdc5-fb32-4337-a3c4-e6a2a1d6a2b2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 02:26:52.757845    9897 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9q7t2" [3bbe3e1d-3ff2-43f5-a29d-006fbdfebbae] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:52.757851    9897 system_pods.go:61] "snapshot-controller-7d9fbc56b8-t2tmt" [b57f6aee-1317-4615-9a50-0c808f07c954] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:52.757859    9897 system_pods.go:61] "storage-provisioner" [2daa6974-1bd3-4976-87ef-c939bb232e93] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 02:26:52.757864    9897 system_pods.go:74] duration metric: took 27.515371ms to wait for pod list to return data ...
	I1216 02:26:52.757872    9897 default_sa.go:34] waiting for default service account to be created ...
	I1216 02:26:52.771524    9897 default_sa.go:45] found service account: "default"
	I1216 02:26:52.771543    9897 default_sa.go:55] duration metric: took 13.663789ms for default service account to be created ...
	I1216 02:26:52.771551    9897 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 02:26:52.853152    9897 system_pods.go:86] 17 kube-system pods found
	I1216 02:26:52.853179    9897 system_pods.go:89] "amd-gpu-device-plugin-4fpsx" [03ef77d5-d326-4953-8e23-ca6c08e8e512] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1216 02:26:52.853187    9897 system_pods.go:89] "coredns-66bc5c9577-4tgqh" [4edb0229-7f11-4e58-90a8-01dc7c8fe069] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 02:26:52.853194    9897 system_pods.go:89] "coredns-66bc5c9577-njd54" [fcee9a3a-3aad-44ae-b91c-62813e31b787] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 02:26:52.853199    9897 system_pods.go:89] "etcd-addons-703051" [6e78bbfd-5089-4514-8052-9e857a63cf57] Running
	I1216 02:26:52.853204    9897 system_pods.go:89] "kube-apiserver-addons-703051" [68ed3f4f-ce1c-4adb-a6e8-86a57e309fb6] Running
	I1216 02:26:52.853207    9897 system_pods.go:89] "kube-controller-manager-addons-703051" [83da4a9d-056e-4e75-b40b-030d0a61647f] Running
	I1216 02:26:52.853212    9897 system_pods.go:89] "kube-ingress-dns-minikube" [c3e6df2b-852c-4522-9103-54ca55c8c849] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1216 02:26:52.853217    9897 system_pods.go:89] "kube-proxy-mwxm8" [064f9463-1ca7-46b8-8428-a3450e6a50a7] Running
	I1216 02:26:52.853221    9897 system_pods.go:89] "kube-scheduler-addons-703051" [0b5f5d06-4a4f-4a55-826b-917b981d723a] Running
	I1216 02:26:52.853226    9897 system_pods.go:89] "metrics-server-85b7d694d7-f4xbr" [972a1533-af9a-480f-a4fb-80c6f4653290] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1216 02:26:52.853235    9897 system_pods.go:89] "nvidia-device-plugin-daemonset-dj88n" [aba0db89-f004-4cbb-880e-fda531ad78c4] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1216 02:26:52.853241    9897 system_pods.go:89] "registry-6b586f9694-l9ptj" [96cdab4e-1722-4bce-87dc-d0c270e803a6] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1216 02:26:52.853246    9897 system_pods.go:89] "registry-creds-764b6fb674-cx22t" [f12a412c-c2cf-4510-8362-2985e7f119b5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1216 02:26:52.853252    9897 system_pods.go:89] "registry-proxy-qx2bk" [ceaffdc5-fb32-4337-a3c4-e6a2a1d6a2b2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1216 02:26:52.853257    9897 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9q7t2" [3bbe3e1d-3ff2-43f5-a29d-006fbdfebbae] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:52.853263    9897 system_pods.go:89] "snapshot-controller-7d9fbc56b8-t2tmt" [b57f6aee-1317-4615-9a50-0c808f07c954] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1216 02:26:52.853267    9897 system_pods.go:89] "storage-provisioner" [2daa6974-1bd3-4976-87ef-c939bb232e93] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 02:26:52.853274    9897 system_pods.go:126] duration metric: took 81.71908ms to wait for k8s-apps to be running ...
	I1216 02:26:52.853282    9897 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 02:26:52.853323    9897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 02:26:52.884839    9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 02:26:53.205889    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:53.206025    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:53.339793    9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.896291209s)
	I1216 02:26:53.339841    9897 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-703051"
	I1216 02:26:53.339861    9897 system_svc.go:56] duration metric: took 486.57168ms WaitForService to wait for kubelet
	I1216 02:26:53.339880    9897 kubeadm.go:587] duration metric: took 9.648393662s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 02:26:53.339907    9897 node_conditions.go:102] verifying NodePressure condition ...
	I1216 02:26:53.339810    9897 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.880911227s)
	I1216 02:26:53.341755    9897 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1216 02:26:53.341770    9897 out.go:179] * Verifying csi-hostpath-driver addon...
	I1216 02:26:53.343011    9897 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1216 02:26:53.343754    9897 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1216 02:26:53.344156    9897 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1216 02:26:53.344170    9897 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1216 02:26:53.353475    9897 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 02:26:53.353494    9897 node_conditions.go:123] node cpu capacity is 2
	I1216 02:26:53.353514    9897 node_conditions.go:105] duration metric: took 13.601339ms to run NodePressure ...
	I1216 02:26:53.353532    9897 start.go:242] waiting for startup goroutines ...
	I1216 02:26:53.364017    9897 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 02:26:53.364031    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:53.471542    9897 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1216 02:26:53.471565    9897 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1216 02:26:53.539480    9897 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 02:26:53.539502    9897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1216 02:26:53.614071    9897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 02:26:53.684148    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:53.687214    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:53.856377    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:54.187075    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:54.187110    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:54.351984    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:54.640966    9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.756084415s)
	I1216 02:26:54.715367    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:54.715551    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:54.787145    9897 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.173034789s)
	I1216 02:26:54.788117    9897 addons.go:495] Verifying addon gcp-auth=true in "addons-703051"
	I1216 02:26:54.790120    9897 out.go:179] * Verifying gcp-auth addon...
	I1216 02:26:54.791739    9897 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1216 02:26:54.816956    9897 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1216 02:26:54.816979    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:54.909749    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:55.187509    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:55.189822    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:55.297247    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:55.348628    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:55.687973    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:55.688529    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:55.797538    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:55.849598    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:56.185902    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:56.187088    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:56.300657    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:56.351244    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:56.690616    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:56.691152    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:56.799607    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:56.849765    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:57.185827    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:57.186015    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:57.295668    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:57.349110    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:57.684651    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:57.684841    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:57.794498    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:57.848958    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:58.185196    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:58.185230    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:58.295221    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:58.347505    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:58.685391    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:58.685449    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:58.795197    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:58.847075    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:59.186223    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:59.186913    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:59.294718    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:59.347543    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:26:59.685125    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:26:59.685247    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:26:59.796746    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:26:59.850024    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:00.187011    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:00.187438    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:00.297289    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:00.349247    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:00.685778    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:00.686893    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:00.795829    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:00.847720    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:01.184718    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:01.185056    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:01.295099    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:01.347150    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:01.685843    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:01.686024    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:01.794738    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:01.848336    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:02.184966    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:02.185151    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:02.294867    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:02.346559    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:02.842554    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:02.843446    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:02.843522    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:02.847202    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:03.185678    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:03.185678    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:03.296334    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:03.349732    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:03.685168    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:03.685594    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:03.796979    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:03.849130    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:04.188829    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:04.188953    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:04.296898    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:04.346951    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:04.684994    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:04.687735    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:04.795524    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:04.849656    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:05.185058    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:05.186278    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:05.296833    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:05.351420    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:05.792635    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:05.794625    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:05.795290    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:05.850965    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:06.186808    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:06.187656    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:06.296302    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:06.397319    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:06.689495    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:06.689671    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:06.795705    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:06.849348    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:07.186562    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:07.187017    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:07.295241    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:07.348427    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:07.686737    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:07.686966    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:07.801014    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:07.848054    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:08.329870    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:08.332882    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:08.333410    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:08.347792    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:08.685082    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:08.685377    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:08.795404    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:08.847436    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:09.188976    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:09.188975    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:09.294643    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:09.347481    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:09.685764    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:09.685764    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:09.794388    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:09.847431    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:10.185775    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:10.185866    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:10.294777    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:10.348197    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:10.690649    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:10.690898    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:10.800538    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:10.850128    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:11.186868    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:11.187434    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:11.295659    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:11.347952    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:11.688049    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:11.688550    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:11.795549    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:11.849498    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:12.185374    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:12.185655    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:12.295595    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:12.347430    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:12.685904    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:12.686352    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:12.794948    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:12.847115    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:13.187709    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:13.187914    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:13.295122    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:13.346918    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:13.684579    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:13.684859    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:13.801090    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:13.848351    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:14.185004    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:14.186551    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:14.296780    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:14.348156    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:14.687182    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:14.687380    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:14.795303    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:14.851137    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:15.186515    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:15.186561    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:15.297688    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:15.352089    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:15.685156    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:15.685740    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:15.794596    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:15.848314    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:16.184611    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 02:27:16.186302    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:16.294991    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:16.347445    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:16.685700    9897 kapi.go:107] duration metric: took 24.004662987s to wait for kubernetes.io/minikube-addons=registry ...
	I1216 02:27:16.685833    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:16.794271    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:16.847328    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:17.184986    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:17.294997    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:17.347702    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:17.684886    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:17.794604    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:17.847368    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:18.186797    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:18.296276    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:18.351082    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:18.685612    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:18.796793    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:18.848204    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:19.186384    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:19.295250    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:19.346984    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:19.685158    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:19.797022    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:19.848004    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:20.186138    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:20.296642    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:20.350759    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:20.684853    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:20.795407    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:20.848428    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:21.186530    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:21.294378    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:21.348145    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:21.684963    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:21.797706    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:21.850651    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:22.184615    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:22.295204    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:22.347839    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:22.687154    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:22.796239    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:22.847212    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:23.283533    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:23.298141    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:23.347312    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:23.685234    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:23.801411    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:23.849963    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:24.350276    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:24.351269    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:24.353679    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:24.685369    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:24.796259    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:24.848961    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:25.186255    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:25.544806    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:25.546850    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:25.686399    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:25.795492    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:25.848152    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:26.185471    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:26.296416    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:26.347898    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:26.685532    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:26.796322    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:26.848468    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:27.185263    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:27.294957    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:27.346558    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:27.686794    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:27.798173    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:27.846778    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:28.184446    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:28.296022    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:28.349180    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:28.685100    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:28.795170    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:28.847135    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:29.184818    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:29.294681    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:29.347969    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:29.684450    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:29.795017    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:29.846656    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:30.183963    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:30.294800    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:30.351338    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:30.687364    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:30.796099    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:30.848625    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:31.188061    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:31.296589    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:31.347757    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:31.691796    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:31.795838    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:31.848309    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:32.187836    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:32.297009    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:32.348058    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:32.685840    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:32.915398    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:32.917075    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:33.185129    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:33.296013    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:33.347519    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:33.687396    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:33.795898    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:33.846984    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:34.184497    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:34.295316    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:34.347410    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:34.687168    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:34.796998    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:34.849490    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:35.184825    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:35.296124    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:35.347910    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:36.056810    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:36.057049    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:36.057286    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:36.186890    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:36.294434    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:36.348373    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:36.685126    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:36.794678    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:36.851194    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:37.185074    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:37.294696    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:37.352827    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:37.688224    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:37.797395    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:37.848062    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:38.185685    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:38.295386    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:38.347301    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:38.684534    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:38.796400    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:38.849725    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:39.184652    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:39.296892    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:39.348486    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:39.686450    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:39.796788    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:39.896775    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:40.192122    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:40.296518    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:40.349134    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:40.685644    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:40.799333    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:40.848195    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:41.186352    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:41.295540    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:41.348400    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:41.685529    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:41.800268    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:41.849612    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:42.186015    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:42.296508    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:42.349123    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:42.687979    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:42.942671    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:42.943451    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:43.185915    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:43.297434    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:43.350640    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:43.689457    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:43.795100    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:43.847041    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:44.184856    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:44.294258    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:44.348342    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:44.685214    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:44.795766    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:44.896669    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:45.196476    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:45.296909    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:45.351400    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:45.685703    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:45.795139    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:45.849892    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:46.184376    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:46.295896    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:46.349935    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:46.689901    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:46.797530    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:46.847466    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:47.186759    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:47.298520    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:47.349755    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:47.684796    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:47.797429    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:47.846890    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:48.184617    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:48.295018    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:48.347659    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:48.687101    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:48.795874    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:48.853496    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:49.188199    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:49.295430    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:49.347655    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:49.684133    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:49.794435    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:49.847091    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:50.185834    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:50.295662    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:50.347197    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:50.686367    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:50.796226    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:50.847124    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:51.188617    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:51.296113    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:51.350732    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:51.685086    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:51.795815    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:51.846488    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:52.185811    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:52.295909    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:52.349599    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:52.683964    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:52.795018    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:52.850067    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:53.184650    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:53.295648    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:53.348572    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 02:27:53.685415    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:53.799428    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:53.847349    9897 kapi.go:107] duration metric: took 1m0.503592744s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1216 02:27:54.184770    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:54.295396    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:54.685130    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:54.794940    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:55.184935    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:55.294887    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:55.684754    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:55.795582    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:56.184695    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:56.294549    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:56.684825    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:56.794807    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:57.185516    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:57.296948    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:57.686245    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:57.799129    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:58.185271    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:58.295950    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:58.688024    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:58.795911    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:59.186294    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:59.297363    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:27:59.685095    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:27:59.797417    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:28:00.252892    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:28:00.297124    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:28:00.685557    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:28:00.797350    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:28:01.190671    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:28:01.297256    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:28:01.686596    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:28:01.869623    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:28:02.186168    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:28:02.296745    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:28:02.685566    9897 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 02:28:02.797045    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:28:03.191349    9897 kapi.go:107] duration metric: took 1m10.510585602s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1216 02:28:03.295656    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:28:03.795509    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:28:04.295644    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:28:04.795013    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:28:05.296142    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:28:05.795308    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:28:06.295895    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:28:06.799564    9897 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 02:28:07.296686    9897 kapi.go:107] duration metric: took 1m12.504944867s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1216 02:28:07.298077    9897 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-703051 cluster.
	I1216 02:28:07.299105    9897 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1216 02:28:07.300113    9897 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1216 02:28:07.301132    9897 out.go:179] * Enabled addons: inspektor-gadget, storage-provisioner, ingress-dns, nvidia-device-plugin, amd-gpu-device-plugin, cloud-spanner, default-storageclass, metrics-server, registry-creds, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1216 02:28:07.302044    9897 addons.go:530] duration metric: took 1m23.610473322s for enable addons: enabled=[inspektor-gadget storage-provisioner ingress-dns nvidia-device-plugin amd-gpu-device-plugin cloud-spanner default-storageclass metrics-server registry-creds yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1216 02:28:07.302078    9897 start.go:247] waiting for cluster config update ...
	I1216 02:28:07.302093    9897 start.go:256] writing updated cluster config ...
	I1216 02:28:07.302375    9897 ssh_runner.go:195] Run: rm -f paused
	I1216 02:28:07.309144    9897 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 02:28:07.312141    9897 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4tgqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:28:07.317308    9897 pod_ready.go:94] pod "coredns-66bc5c9577-4tgqh" is "Ready"
	I1216 02:28:07.317325    9897 pod_ready.go:86] duration metric: took 5.162711ms for pod "coredns-66bc5c9577-4tgqh" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:28:07.318846    9897 pod_ready.go:83] waiting for pod "etcd-addons-703051" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:28:07.324159    9897 pod_ready.go:94] pod "etcd-addons-703051" is "Ready"
	I1216 02:28:07.324182    9897 pod_ready.go:86] duration metric: took 5.315081ms for pod "etcd-addons-703051" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:28:07.326004    9897 pod_ready.go:83] waiting for pod "kube-apiserver-addons-703051" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:28:07.331185    9897 pod_ready.go:94] pod "kube-apiserver-addons-703051" is "Ready"
	I1216 02:28:07.331204    9897 pod_ready.go:86] duration metric: took 5.180201ms for pod "kube-apiserver-addons-703051" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:28:07.333159    9897 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-703051" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:28:07.713347    9897 pod_ready.go:94] pod "kube-controller-manager-addons-703051" is "Ready"
	I1216 02:28:07.713373    9897 pod_ready.go:86] duration metric: took 380.194535ms for pod "kube-controller-manager-addons-703051" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:28:07.913876    9897 pod_ready.go:83] waiting for pod "kube-proxy-mwxm8" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:28:08.313545    9897 pod_ready.go:94] pod "kube-proxy-mwxm8" is "Ready"
	I1216 02:28:08.313569    9897 pod_ready.go:86] duration metric: took 399.67289ms for pod "kube-proxy-mwxm8" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:28:08.513511    9897 pod_ready.go:83] waiting for pod "kube-scheduler-addons-703051" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:28:08.913351    9897 pod_ready.go:94] pod "kube-scheduler-addons-703051" is "Ready"
	I1216 02:28:08.913373    9897 pod_ready.go:86] duration metric: took 399.840179ms for pod "kube-scheduler-addons-703051" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 02:28:08.913383    9897 pod_ready.go:40] duration metric: took 1.604212346s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 02:28:08.958847    9897 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 02:28:08.960560    9897 out.go:179] * Done! kubectl is now configured to use "addons-703051" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.113536256Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f979fb2d-e6e4-4bed-a093-e092231dcee6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.113754496Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f979fb2d-e6e4-4bed-a093-e092231dcee6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.114526481Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71f3caa0dccaaf15497314a0ad0e9ccbf3feff771407550dfdde32cffc5bb271,PodSandboxId:763cd2ef52e008c64f72b0e0585a9bddf5ab53d237c0ba56ffa04410abe9e9e7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765852130940239679,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ecb2063-1677-48a3-8f27-ea2c7d5c93c6,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0fbe6406af877a88a5083785c255d7e941dc2100c87d2fc6cfca0295fcbf1ed,PodSandboxId:cc4c3965fc757dd2531aee75116597d3e9e942c22508cbb90365f3a8debd3d62,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765852093102579336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58286223-3023-49e8-8c96-fbc4885799ab,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ce59b95ef32ae51897233591dba179882a1f2328a1d30205e066849cf2a740,PodSandboxId:d42faa16e2e656936396d9bdfb7b4ab0880c76777ce14a42e2c74f01a30a1629,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765852082870372180,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-shbcn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 09f33b70-4a2a-46bb-a669-c394ee4e50c4,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fab5c50c31014edbab8ede19b9ad93d96b39f7bbe924aede82f25a1e0aa588aa,PodSandboxId:10bfcaff868baf43197a9a8bb30550d60d00cffdc9fb220f9860726512f39291,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765852064753226316,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-srpnh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4639c7bb-d0f5-4de6-806a-ff36ea0d752d,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087e5f624b27317d64265d94dfc14af129d6a16e2f2c8f8e3d8e80a9b212dbf0,PodSandboxId:422ae0ee2fb42db23a7cf7b51ec47d90d301557c04ce1fded8d56dde54cf7204,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765852061239569953,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vvbgk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0cdb4ee2-5764-461b-84ca-30bb4d8bc4a1,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23048578564ae1325eaf7e84c45f992ede79ae60b89ec635455ddbd5863f8280,PodSandboxId:6a4e7dd7c377bb16d5fe8b55760021f98ecdda13d1ec76aaa682b9fbab40faf5,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765852048345143921,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-b7pbm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 77521070-52a4-4796-8faf-799cc1b59cf3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0454ea8b922d25a2c9ca5713444c8978a81d6f934fa812292f554fcb1c77b80a,PodSandboxId:e8bbdea8ec749fac6e023c0ddb4527803d3bdbbaa3940c3a85425ccd6de375fe,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6c
d76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765852045655169508,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e6df2b-852c-4522-9103-54ca55c8c849,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bdaf5c98cdf2241fc96d2c03e80eae9e6104c632934288d1163b5799e7b6fed,PodSandboxId:350b7ed773819dc009f07d78eaba173d2aef24c1a6bd11602c3ba477c728be21,Metada
ta:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765852020565783471,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-4fpsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03ef77d5-d326-4953-8e23-ca6c08e8e512,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68acdb398858e7f163118bca702ed13a99eefaee99e6d8be83f3af1ec90af7b,PodSandboxId:72fd8159eb8bc5dcf2d94955296dc5d
9178c97870421baf64f47c79cbd89db57,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765852011543363498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2daa6974-1bd3-4976-87ef-c939bb232e93,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0388da6adb851f0d46886e354c87f018e40e3a963fb20cccea3430b926c6eccc,PodSandboxId:612b308ed418e0081ba2e4d9eae66bb5bf4a82aab17
4b9b7164ed9dd0dc8bd78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765852005645418287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4tgqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edb0229-7f11-4e58-90a8-01dc7c8fe069,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.ku
bernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1947cc0b3ab5e45d542ffca511b910b70e9b09ab19381d77587d1ffa064d6217,PodSandboxId:7b4be50a94b2d8987deaa5ee13f1789213654058adee5ef63a2d716a0c1ba8fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765852004934657951,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwxm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 064f9463-1ca7-46b8-8428-a3450e6a50a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f075f2bc2541e1114ca94b40540568b39a2ac8c5851d908739adbca47426b40,PodSandboxId:6058440c99aced855334ebbda5756776003af605bd06cb693332dbd9d0bb621f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765851991956369609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245b1a6750f2209a55a4a1baaaa78ec8,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20246f1c56f2b5aef9e3deb5898ea3b6f1a8c8732832f664d0c1ce79f0e058d4,PodSandboxId:62ceeff5c74e41e9cd844e4626e237f47ea62c7547a8bce67f18d048891c3762,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765851991963554720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8811ee76d1eb677a2bf71e866b381a00,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4ee09f2d08e04fafa64043bf9167661a9fac75aa2828ba1df68e1e9ac9d42d,PodSandboxId:e20407cf0464e587a7706f66ab4840fc86f700c347fdff7d35da90102aae0f09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765851991901228998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-703051,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: a94cf4464d1003900ad58539e89badef,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:960abe7ee91b99d086a9808387f532e974c02cfe22515b93b199b699e1435675,PodSandboxId:825ff8245730e9248a69f1bbe4fef00205d0bd8900b2f07c16a68a75156e5031,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765851991892436142,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 986649e00b22f8edb5a55a6ff7bf1f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f979fb2d-e6e4-4bed-a093-e092231dcee6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.135459817Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/1.0" file="docker/docker_client.go:631"
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.147933925Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2d1b80e6-dcab-4733-893c-25955cbe27f0 name=/runtime.v1.RuntimeService/Version
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.148147444Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2d1b80e6-dcab-4733-893c-25955cbe27f0 name=/runtime.v1.RuntimeService/Version
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.149809250Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=31bff1ea-17bf-49e0-b903-848c69793724 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.150951948Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765852273150928936,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=31bff1ea-17bf-49e0-b903-848c69793724 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.151855848Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4004ff82-0029-4e68-90f4-afaebb61cbb9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.151922694Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4004ff82-0029-4e68-90f4-afaebb61cbb9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.152233721Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71f3caa0dccaaf15497314a0ad0e9ccbf3feff771407550dfdde32cffc5bb271,PodSandboxId:763cd2ef52e008c64f72b0e0585a9bddf5ab53d237c0ba56ffa04410abe9e9e7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765852130940239679,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ecb2063-1677-48a3-8f27-ea2c7d5c93c6,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0fbe6406af877a88a5083785c255d7e941dc2100c87d2fc6cfca0295fcbf1ed,PodSandboxId:cc4c3965fc757dd2531aee75116597d3e9e942c22508cbb90365f3a8debd3d62,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765852093102579336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58286223-3023-49e8-8c96-fbc4885799ab,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ce59b95ef32ae51897233591dba179882a1f2328a1d30205e066849cf2a740,PodSandboxId:d42faa16e2e656936396d9bdfb7b4ab0880c76777ce14a42e2c74f01a30a1629,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765852082870372180,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-shbcn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 09f33b70-4a2a-46bb-a669-c394ee4e50c4,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fab5c50c31014edbab8ede19b9ad93d96b39f7bbe924aede82f25a1e0aa588aa,PodSandboxId:10bfcaff868baf43197a9a8bb30550d60d00cffdc9fb220f9860726512f39291,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765852064753226316,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-srpnh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4639c7bb-d0f5-4de6-806a-ff36ea0d752d,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087e5f624b27317d64265d94dfc14af129d6a16e2f2c8f8e3d8e80a9b212dbf0,PodSandboxId:422ae0ee2fb42db23a7cf7b51ec47d90d301557c04ce1fded8d56dde54cf7204,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765852061239569953,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vvbgk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0cdb4ee2-5764-461b-84ca-30bb4d8bc4a1,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23048578564ae1325eaf7e84c45f992ede79ae60b89ec635455ddbd5863f8280,PodSandboxId:6a4e7dd7c377bb16d5fe8b55760021f98ecdda13d1ec76aaa682b9fbab40faf5,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765852048345143921,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-b7pbm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 77521070-52a4-4796-8faf-799cc1b59cf3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0454ea8b922d25a2c9ca5713444c8978a81d6f934fa812292f554fcb1c77b80a,PodSandboxId:e8bbdea8ec749fac6e023c0ddb4527803d3bdbbaa3940c3a85425ccd6de375fe,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6c
d76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765852045655169508,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e6df2b-852c-4522-9103-54ca55c8c849,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bdaf5c98cdf2241fc96d2c03e80eae9e6104c632934288d1163b5799e7b6fed,PodSandboxId:350b7ed773819dc009f07d78eaba173d2aef24c1a6bd11602c3ba477c728be21,Metada
ta:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765852020565783471,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-4fpsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03ef77d5-d326-4953-8e23-ca6c08e8e512,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68acdb398858e7f163118bca702ed13a99eefaee99e6d8be83f3af1ec90af7b,PodSandboxId:72fd8159eb8bc5dcf2d94955296dc5d
9178c97870421baf64f47c79cbd89db57,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765852011543363498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2daa6974-1bd3-4976-87ef-c939bb232e93,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0388da6adb851f0d46886e354c87f018e40e3a963fb20cccea3430b926c6eccc,PodSandboxId:612b308ed418e0081ba2e4d9eae66bb5bf4a82aab17
4b9b7164ed9dd0dc8bd78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765852005645418287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4tgqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edb0229-7f11-4e58-90a8-01dc7c8fe069,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.ku
bernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1947cc0b3ab5e45d542ffca511b910b70e9b09ab19381d77587d1ffa064d6217,PodSandboxId:7b4be50a94b2d8987deaa5ee13f1789213654058adee5ef63a2d716a0c1ba8fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765852004934657951,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwxm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 064f9463-1ca7-46b8-8428-a3450e6a50a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f075f2bc2541e1114ca94b40540568b39a2ac8c5851d908739adbca47426b40,PodSandboxId:6058440c99aced855334ebbda5756776003af605bd06cb693332dbd9d0bb621f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765851991956369609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245b1a6750f2209a55a4a1baaaa78ec8,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20246f1c56f2b5aef9e3deb5898ea3b6f1a8c8732832f664d0c1ce79f0e058d4,PodSandboxId:62ceeff5c74e41e9cd844e4626e237f47ea62c7547a8bce67f18d048891c3762,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765851991963554720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8811ee76d1eb677a2bf71e866b381a00,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4ee09f2d08e04fafa64043bf9167661a9fac75aa2828ba1df68e1e9ac9d42d,PodSandboxId:e20407cf0464e587a7706f66ab4840fc86f700c347fdff7d35da90102aae0f09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765851991901228998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-703051,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: a94cf4464d1003900ad58539e89badef,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:960abe7ee91b99d086a9808387f532e974c02cfe22515b93b199b699e1435675,PodSandboxId:825ff8245730e9248a69f1bbe4fef00205d0bd8900b2f07c16a68a75156e5031,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765851991892436142,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 986649e00b22f8edb5a55a6ff7bf1f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4004ff82-0029-4e68-90f4-afaebb61cbb9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.183804962Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fb43fcf6-dd82-496a-b427-b4384ee02d35 name=/runtime.v1.RuntimeService/Version
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.183874464Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fb43fcf6-dd82-496a-b427-b4384ee02d35 name=/runtime.v1.RuntimeService/Version
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.185176700Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=009dda2d-fdbc-4d9e-9463-714d3b35356d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.186965520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765852273186879371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=009dda2d-fdbc-4d9e-9463-714d3b35356d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.187979887Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16bad539-0e62-48c7-80d5-25ac99a91990 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.188033558Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16bad539-0e62-48c7-80d5-25ac99a91990 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.188320826Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71f3caa0dccaaf15497314a0ad0e9ccbf3feff771407550dfdde32cffc5bb271,PodSandboxId:763cd2ef52e008c64f72b0e0585a9bddf5ab53d237c0ba56ffa04410abe9e9e7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765852130940239679,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ecb2063-1677-48a3-8f27-ea2c7d5c93c6,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0fbe6406af877a88a5083785c255d7e941dc2100c87d2fc6cfca0295fcbf1ed,PodSandboxId:cc4c3965fc757dd2531aee75116597d3e9e942c22508cbb90365f3a8debd3d62,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765852093102579336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58286223-3023-49e8-8c96-fbc4885799ab,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ce59b95ef32ae51897233591dba179882a1f2328a1d30205e066849cf2a740,PodSandboxId:d42faa16e2e656936396d9bdfb7b4ab0880c76777ce14a42e2c74f01a30a1629,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765852082870372180,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-shbcn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 09f33b70-4a2a-46bb-a669-c394ee4e50c4,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fab5c50c31014edbab8ede19b9ad93d96b39f7bbe924aede82f25a1e0aa588aa,PodSandboxId:10bfcaff868baf43197a9a8bb30550d60d00cffdc9fb220f9860726512f39291,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765852064753226316,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-srpnh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4639c7bb-d0f5-4de6-806a-ff36ea0d752d,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087e5f624b27317d64265d94dfc14af129d6a16e2f2c8f8e3d8e80a9b212dbf0,PodSandboxId:422ae0ee2fb42db23a7cf7b51ec47d90d301557c04ce1fded8d56dde54cf7204,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765852061239569953,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vvbgk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0cdb4ee2-5764-461b-84ca-30bb4d8bc4a1,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23048578564ae1325eaf7e84c45f992ede79ae60b89ec635455ddbd5863f8280,PodSandboxId:6a4e7dd7c377bb16d5fe8b55760021f98ecdda13d1ec76aaa682b9fbab40faf5,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765852048345143921,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-b7pbm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 77521070-52a4-4796-8faf-799cc1b59cf3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0454ea8b922d25a2c9ca5713444c8978a81d6f934fa812292f554fcb1c77b80a,PodSandboxId:e8bbdea8ec749fac6e023c0ddb4527803d3bdbbaa3940c3a85425ccd6de375fe,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6c
d76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765852045655169508,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e6df2b-852c-4522-9103-54ca55c8c849,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bdaf5c98cdf2241fc96d2c03e80eae9e6104c632934288d1163b5799e7b6fed,PodSandboxId:350b7ed773819dc009f07d78eaba173d2aef24c1a6bd11602c3ba477c728be21,Metada
ta:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765852020565783471,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-4fpsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03ef77d5-d326-4953-8e23-ca6c08e8e512,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68acdb398858e7f163118bca702ed13a99eefaee99e6d8be83f3af1ec90af7b,PodSandboxId:72fd8159eb8bc5dcf2d94955296dc5d
9178c97870421baf64f47c79cbd89db57,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765852011543363498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2daa6974-1bd3-4976-87ef-c939bb232e93,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0388da6adb851f0d46886e354c87f018e40e3a963fb20cccea3430b926c6eccc,PodSandboxId:612b308ed418e0081ba2e4d9eae66bb5bf4a82aab17
4b9b7164ed9dd0dc8bd78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765852005645418287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4tgqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edb0229-7f11-4e58-90a8-01dc7c8fe069,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.ku
bernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1947cc0b3ab5e45d542ffca511b910b70e9b09ab19381d77587d1ffa064d6217,PodSandboxId:7b4be50a94b2d8987deaa5ee13f1789213654058adee5ef63a2d716a0c1ba8fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765852004934657951,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwxm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 064f9463-1ca7-46b8-8428-a3450e6a50a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f075f2bc2541e1114ca94b40540568b39a2ac8c5851d908739adbca47426b40,PodSandboxId:6058440c99aced855334ebbda5756776003af605bd06cb693332dbd9d0bb621f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765851991956369609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245b1a6750f2209a55a4a1baaaa78ec8,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20246f1c56f2b5aef9e3deb5898ea3b6f1a8c8732832f664d0c1ce79f0e058d4,PodSandboxId:62ceeff5c74e41e9cd844e4626e237f47ea62c7547a8bce67f18d048891c3762,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765851991963554720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8811ee76d1eb677a2bf71e866b381a00,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4ee09f2d08e04fafa64043bf9167661a9fac75aa2828ba1df68e1e9ac9d42d,PodSandboxId:e20407cf0464e587a7706f66ab4840fc86f700c347fdff7d35da90102aae0f09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765851991901228998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-703051,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: a94cf4464d1003900ad58539e89badef,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:960abe7ee91b99d086a9808387f532e974c02cfe22515b93b199b699e1435675,PodSandboxId:825ff8245730e9248a69f1bbe4fef00205d0bd8900b2f07c16a68a75156e5031,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765851991892436142,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 986649e00b22f8edb5a55a6ff7bf1f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16bad539-0e62-48c7-80d5-25ac99a91990 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.221146300Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=34b0ffdd-5adf-447b-a913-3331bb5db8ce name=/runtime.v1.RuntimeService/Version
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.221270623Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=34b0ffdd-5adf-447b-a913-3331bb5db8ce name=/runtime.v1.RuntimeService/Version
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.222556416Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=324581e4-cc66-462d-bb24-01ebc7ccaf00 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.223768627Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765852273223743429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:545771,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=324581e4-cc66-462d-bb24-01ebc7ccaf00 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.224404455Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e3f8eeb-b76a-4aae-bbc3-4a7aeca853f2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.224484175Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e3f8eeb-b76a-4aae-bbc3-4a7aeca853f2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 02:31:13 addons-703051 crio[816]: time="2025-12-16 02:31:13.224835335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:71f3caa0dccaaf15497314a0ad0e9ccbf3feff771407550dfdde32cffc5bb271,PodSandboxId:763cd2ef52e008c64f72b0e0585a9bddf5ab53d237c0ba56ffa04410abe9e9e7,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c,State:CONTAINER_RUNNING,CreatedAt:1765852130940239679,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6ecb2063-1677-48a3-8f27-ea2c7d5c93c6,},Annotations:map[string]string{io.kubernetes.container.hash: 66bc2494,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"
}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0fbe6406af877a88a5083785c255d7e941dc2100c87d2fc6cfca0295fcbf1ed,PodSandboxId:cc4c3965fc757dd2531aee75116597d3e9e942c22508cbb90365f3a8debd3d62,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1765852093102579336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 58286223-3023-49e8-8c96-fbc4885799ab,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ce59b95ef32ae51897233591dba179882a1f2328a1d30205e066849cf2a740,PodSandboxId:d42faa16e2e656936396d9bdfb7b4ab0880c76777ce14a42e2c74f01a30a1629,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8043403e50094a07ba382a116497ecfb317f3196e9c8063c89be209f4f654810,State:CONTAINER_RUNNING,CreatedAt:1765852082870372180,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-85d4c799dd-shbcn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 09f33b70-4a2a-46bb-a669-c394ee4e50c4,},Annotations:map[string]string{io.kubernet
es.container.hash: 6f36061b,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:fab5c50c31014edbab8ede19b9ad93d96b39f7bbe924aede82f25a1e0aa588aa,PodSandboxId:10bfcaff868baf43197a9a8bb30550d60d00cffdc9fb220f9860726512f39291,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e52b
258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765852064753226316,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-srpnh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4639c7bb-d0f5-4de6-806a-ff36ea0d752d,},Annotations:map[string]string{io.kubernetes.container.hash: 66c411f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087e5f624b27317d64265d94dfc14af129d6a16e2f2c8f8e3d8e80a9b212dbf0,PodSandboxId:422ae0ee2fb42db23a7cf7b51ec47d90d301557c04ce1fded8d56dde54cf7204,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285,Annotations:map[string]string{},UserSpecifiedImage
:,RuntimeHandler:,},ImageRef:a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e,State:CONTAINER_EXITED,CreatedAt:1765852061239569953,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vvbgk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0cdb4ee2-5764-461b-84ca-30bb4d8bc4a1,},Annotations:map[string]string{io.kubernetes.container.hash: b83b2b1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23048578564ae1325eaf7e84c45f992ede79ae60b89ec635455ddbd5863f8280,PodSandboxId:6a4e7dd7c377bb16d5fe8b55760021f98ecdda13d1ec76aaa682b9fbab40faf5,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annota
tions:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1765852048345143921,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-b7pbm,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 77521070-52a4-4796-8faf-799cc1b59cf3,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0454ea8b922d25a2c9ca5713444c8978a81d6f934fa812292f554fcb1c77b80a,PodSandboxId:e8bbdea8ec749fac6e023c0ddb4527803d3bdbbaa3940c3a85425ccd6de375fe,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6c
d76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1765852045655169508,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e6df2b-852c-4522-9103-54ca55c8c849,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8bdaf5c98cdf2241fc96d2c03e80eae9e6104c632934288d1163b5799e7b6fed,PodSandboxId:350b7ed773819dc009f07d78eaba173d2aef24c1a6bd11602c3ba477c728be21,Metada
ta:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1765852020565783471,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-4fpsx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03ef77d5-d326-4953-8e23-ca6c08e8e512,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c68acdb398858e7f163118bca702ed13a99eefaee99e6d8be83f3af1ec90af7b,PodSandboxId:72fd8159eb8bc5dcf2d94955296dc5d
9178c97870421baf64f47c79cbd89db57,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765852011543363498,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2daa6974-1bd3-4976-87ef-c939bb232e93,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0388da6adb851f0d46886e354c87f018e40e3a963fb20cccea3430b926c6eccc,PodSandboxId:612b308ed418e0081ba2e4d9eae66bb5bf4a82aab17
4b9b7164ed9dd0dc8bd78,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765852005645418287,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-4tgqh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4edb0229-7f11-4e58-90a8-01dc7c8fe069,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.ku
bernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1947cc0b3ab5e45d542ffca511b910b70e9b09ab19381d77587d1ffa064d6217,PodSandboxId:7b4be50a94b2d8987deaa5ee13f1789213654058adee5ef63a2d716a0c1ba8fe,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765852004934657951,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mwxm8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 064f9463-1ca7-46b8-8428-a3450e6a50a7,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f075f2bc2541e1114ca94b40540568b39a2ac8c5851d908739adbca47426b40,PodSandboxId:6058440c99aced855334ebbda5756776003af605bd06cb693332dbd9d0bb621f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765851991956369609,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 245b1a6750f2209a55a4a1baaaa78ec8,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\
",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20246f1c56f2b5aef9e3deb5898ea3b6f1a8c8732832f664d0c1ce79f0e058d4,PodSandboxId:62ceeff5c74e41e9cd844e4626e237f47ea62c7547a8bce67f18d048891c3762,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_RUNNING,CreatedAt:1765851991963554720,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8811ee76d1eb677a2bf71e866b381a00,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4ee09f2d08e04fafa64043bf9167661a9fac75aa2828ba1df68e1e9ac9d42d,PodSandboxId:e20407cf0464e587a7706f66ab4840fc86f700c347fdff7d35da90102aae0f09,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765851991901228998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-703051,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: a94cf4464d1003900ad58539e89badef,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:960abe7ee91b99d086a9808387f532e974c02cfe22515b93b199b699e1435675,PodSandboxId:825ff8245730e9248a69f1bbe4fef00205d0bd8900b2f07c16a68a75156e5031,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765851991892436142,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-703051,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 986649e00b22f8edb5a55a6ff7bf1f4f,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e3f8eeb-b76a-4aae-bbc3-4a7aeca853f2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	71f3caa0dccaa       public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff                           2 minutes ago       Running             nginx                     0                   763cd2ef52e00       nginx                                       default
	b0fbe6406af87       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   cc4c3965fc757       busybox                                     default
	e2ce59b95ef32       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad             3 minutes ago       Running             controller                0                   d42faa16e2e65       ingress-nginx-controller-85d4c799dd-shbcn   ingress-nginx
	fab5c50c31014       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              patch                     0                   10bfcaff868ba       ingress-nginx-admission-patch-srpnh         ingress-nginx
	087e5f624b273       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:03a00eb0e255e8a25fa49926c24cde0f7e12e8d072c445cdf5136ec78b546285   3 minutes ago       Exited              create                    0                   422ae0ee2fb42       ingress-nginx-admission-create-vvbgk        ingress-nginx
	23048578564ae       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             3 minutes ago       Running             local-path-provisioner    0                   6a4e7dd7c377b       local-path-provisioner-648f6765c9-b7pbm     local-path-storage
	0454ea8b922d2       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago       Running             minikube-ingress-dns      0                   e8bbdea8ec749       kube-ingress-dns-minikube                   kube-system
	8bdaf5c98cdf2       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   350b7ed773819       amd-gpu-device-plugin-4fpsx                 kube-system
	c68acdb398858       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   72fd8159eb8bc       storage-provisioner                         kube-system
	0388da6adb851       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   612b308ed418e       coredns-66bc5c9577-4tgqh                    kube-system
	1947cc0b3ab5e       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45                                                             4 minutes ago       Running             kube-proxy                0                   7b4be50a94b2d       kube-proxy-mwxm8                            kube-system
	20246f1c56f2b       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                             4 minutes ago       Running             etcd                      0                   62ceeff5c74e4       etcd-addons-703051                          kube-system
	5f075f2bc2541       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952                                                             4 minutes ago       Running             kube-scheduler            0                   6058440c99ace       kube-scheduler-addons-703051                kube-system
	fc4ee09f2d08e       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85                                                             4 minutes ago       Running             kube-apiserver            0                   e20407cf0464e       kube-apiserver-addons-703051                kube-system
	960abe7ee91b9       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8                                                             4 minutes ago       Running             kube-controller-manager   0                   825ff8245730e       kube-controller-manager-addons-703051       kube-system
	
	
	==> coredns [0388da6adb851f0d46886e354c87f018e40e3a963fb20cccea3430b926c6eccc] <==
	[INFO] 10.244.0.8:49772 - 36756 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000569298s
	[INFO] 10.244.0.8:49772 - 13828 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.001973974s
	[INFO] 10.244.0.8:49772 - 25037 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000128971s
	[INFO] 10.244.0.8:49772 - 19052 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000119224s
	[INFO] 10.244.0.8:49772 - 1028 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000093758s
	[INFO] 10.244.0.8:49772 - 52630 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000423177s
	[INFO] 10.244.0.8:49772 - 32871 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000343165s
	[INFO] 10.244.0.8:41144 - 60468 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000144941s
	[INFO] 10.244.0.8:41144 - 60115 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000200543s
	[INFO] 10.244.0.8:60638 - 13222 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000103536s
	[INFO] 10.244.0.8:60638 - 12978 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000250688s
	[INFO] 10.244.0.8:56950 - 4105 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111668s
	[INFO] 10.244.0.8:56950 - 4540 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000195887s
	[INFO] 10.244.0.8:49764 - 61881 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000093118s
	[INFO] 10.244.0.8:49764 - 62076 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000411734s
	[INFO] 10.244.0.23:46640 - 59151 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000634874s
	[INFO] 10.244.0.23:38898 - 53246 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000228146s
	[INFO] 10.244.0.23:59176 - 40575 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000280787s
	[INFO] 10.244.0.23:49933 - 48090 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000110978s
	[INFO] 10.244.0.23:34431 - 55636 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000139635s
	[INFO] 10.244.0.23:57158 - 59196 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000260344s
	[INFO] 10.244.0.23:56169 - 37233 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00145696s
	[INFO] 10.244.0.23:38331 - 48625 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.002505553s
	[INFO] 10.244.0.28:40445 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000567211s
	[INFO] 10.244.0.28:56782 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000283659s
	
	
	==> describe nodes <==
	Name:               addons-703051
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-703051
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=addons-703051
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T02_26_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-703051
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 02:26:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-703051
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 02:31:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 02:29:12 +0000   Tue, 16 Dec 2025 02:26:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 02:29:12 +0000   Tue, 16 Dec 2025 02:26:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 02:29:12 +0000   Tue, 16 Dec 2025 02:26:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 02:29:12 +0000   Tue, 16 Dec 2025 02:26:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.237
	  Hostname:    addons-703051
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4ab45a7215f430ebc6b14f6c9c94339
	  System UUID:                c4ab45a7-215f-430e-bc6b-14f6c9c94339
	  Boot ID:                    354609bf-8610-43fc-90e7-f3e35d0f06fe
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  default                     hello-world-app-5d498dc89-8b4zv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-shbcn    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m21s
	  kube-system                 amd-gpu-device-plugin-4fpsx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 coredns-66bc5c9577-4tgqh                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m29s
	  kube-system                 etcd-addons-703051                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m35s
	  kube-system                 kube-apiserver-addons-703051                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-controller-manager-addons-703051        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-proxy-mwxm8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-scheduler-addons-703051                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  local-path-storage          local-path-provisioner-648f6765c9-b7pbm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m27s                  kube-proxy       
	  Normal  Starting                 4m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m42s (x8 over 4m42s)  kubelet          Node addons-703051 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m42s (x8 over 4m42s)  kubelet          Node addons-703051 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m42s (x7 over 4m42s)  kubelet          Node addons-703051 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m35s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m35s                  kubelet          Node addons-703051 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m35s                  kubelet          Node addons-703051 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m35s                  kubelet          Node addons-703051 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m34s                  kubelet          Node addons-703051 status is now: NodeReady
	  Normal  RegisteredNode           4m30s                  node-controller  Node addons-703051 event: Registered Node addons-703051 in Controller
	
	
	==> dmesg <==
	[  +0.000033] kauditd_printk_skb: 348 callbacks suppressed
	[  +0.741289] kauditd_printk_skb: 428 callbacks suppressed
	[Dec16 02:27] kauditd_printk_skb: 227 callbacks suppressed
	[  +5.656396] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.843452] kauditd_printk_skb: 38 callbacks suppressed
	[ +13.586585] kauditd_printk_skb: 32 callbacks suppressed
	[  +7.937861] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.114481] kauditd_printk_skb: 107 callbacks suppressed
	[  +1.535550] kauditd_printk_skb: 85 callbacks suppressed
	[  +1.602796] kauditd_printk_skb: 146 callbacks suppressed
	[  +0.000039] kauditd_printk_skb: 35 callbacks suppressed
	[Dec16 02:28] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.000029] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.311996] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.000089] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.779101] kauditd_printk_skb: 107 callbacks suppressed
	[  +0.002160] kauditd_printk_skb: 90 callbacks suppressed
	[  +2.169211] kauditd_printk_skb: 159 callbacks suppressed
	[  +0.661709] kauditd_printk_skb: 162 callbacks suppressed
	[  +0.932157] kauditd_printk_skb: 36 callbacks suppressed
	[Dec16 02:29] kauditd_printk_skb: 24 callbacks suppressed
	[  +8.989939] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000063] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.859218] kauditd_printk_skb: 41 callbacks suppressed
	[Dec16 02:31] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [20246f1c56f2b5aef9e3deb5898ea3b6f1a8c8732832f664d0c1ce79f0e058d4] <==
	{"level":"warn","ts":"2025-12-16T02:27:36.045599Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T02:27:35.678350Z","time spent":"367.239105ms","remote":"127.0.0.1:48036","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-12-16T02:27:36.045744Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"189.788142ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T02:27:36.045765Z","caller":"traceutil/trace.go:172","msg":"trace[549298602] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1021; }","duration":"189.80812ms","start":"2025-12-16T02:27:35.855950Z","end":"2025-12-16T02:27:36.045759Z","steps":["trace[549298602] 'range keys from in-memory index tree'  (duration: 189.745918ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T02:27:42.931007Z","caller":"traceutil/trace.go:172","msg":"trace[689026313] linearizableReadLoop","detail":"{readStateIndex:1081; appliedIndex:1081; }","duration":"141.335654ms","start":"2025-12-16T02:27:42.789655Z","end":"2025-12-16T02:27:42.930991Z","steps":["trace[689026313] 'read index received'  (duration: 141.331405ms)","trace[689026313] 'applied index is now lower than readState.Index'  (duration: 3.399µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T02:27:42.934890Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.451864ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T02:27:42.934953Z","caller":"traceutil/trace.go:172","msg":"trace[531332558] range","detail":"{range_begin:/registry/rolebindings; range_end:; response_count:0; response_revision:1055; }","duration":"118.528016ms","start":"2025-12-16T02:27:42.816415Z","end":"2025-12-16T02:27:42.934943Z","steps":["trace[531332558] 'agreement among raft nodes before linearized reading'  (duration: 118.434159ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T02:27:42.934890Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.213551ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T02:27:42.935003Z","caller":"traceutil/trace.go:172","msg":"trace[1860302882] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1055; }","duration":"145.343126ms","start":"2025-12-16T02:27:42.789651Z","end":"2025-12-16T02:27:42.934994Z","steps":["trace[1860302882] 'agreement among raft nodes before linearized reading'  (duration: 141.962033ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T02:27:51.540319Z","caller":"traceutil/trace.go:172","msg":"trace[1135183922] transaction","detail":"{read_only:false; response_revision:1136; number_of_response:1; }","duration":"105.510637ms","start":"2025-12-16T02:27:51.434793Z","end":"2025-12-16T02:27:51.540304Z","steps":["trace[1135183922] 'process raft request'  (duration: 105.423187ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T02:28:00.242867Z","caller":"traceutil/trace.go:172","msg":"trace[1306737295] linearizableReadLoop","detail":"{readStateIndex:1189; appliedIndex:1189; }","duration":"182.311851ms","start":"2025-12-16T02:28:00.060540Z","end":"2025-12-16T02:28:00.242852Z","steps":["trace[1306737295] 'read index received'  (duration: 182.302747ms)","trace[1306737295] 'applied index is now lower than readState.Index'  (duration: 8.008µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T02:28:00.244858Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.305258ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T02:28:00.244911Z","caller":"traceutil/trace.go:172","msg":"trace[741134019] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:1159; }","duration":"184.365266ms","start":"2025-12-16T02:28:00.060537Z","end":"2025-12-16T02:28:00.244902Z","steps":["trace[741134019] 'agreement among raft nodes before linearized reading'  (duration: 182.394603ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T02:28:00.245204Z","caller":"traceutil/trace.go:172","msg":"trace[764059712] transaction","detail":"{read_only:false; response_revision:1160; number_of_response:1; }","duration":"188.392234ms","start":"2025-12-16T02:28:00.056804Z","end":"2025-12-16T02:28:00.245196Z","steps":["trace[764059712] 'process raft request'  (duration: 186.155367ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T02:28:31.553762Z","caller":"traceutil/trace.go:172","msg":"trace[1706003156] transaction","detail":"{read_only:false; response_revision:1358; number_of_response:1; }","duration":"147.959421ms","start":"2025-12-16T02:28:31.405783Z","end":"2025-12-16T02:28:31.553743Z","steps":["trace[1706003156] 'process raft request'  (duration: 147.161094ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T02:28:33.791128Z","caller":"traceutil/trace.go:172","msg":"trace[1821992009] transaction","detail":"{read_only:false; response_revision:1361; number_of_response:1; }","duration":"204.991733ms","start":"2025-12-16T02:28:33.586124Z","end":"2025-12-16T02:28:33.791116Z","steps":["trace[1821992009] 'process raft request'  (duration: 204.843372ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T02:28:41.425366Z","caller":"traceutil/trace.go:172","msg":"trace[1597314245] linearizableReadLoop","detail":"{readStateIndex:1475; appliedIndex:1475; }","duration":"252.486496ms","start":"2025-12-16T02:28:41.172823Z","end":"2025-12-16T02:28:41.425310Z","steps":["trace[1597314245] 'read index received'  (duration: 252.478072ms)","trace[1597314245] 'applied index is now lower than readState.Index'  (duration: 7.257µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-16T02:28:41.426010Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"352.450732ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" limit:1 ","response":"range_response_count:1 size:635"}
	{"level":"info","ts":"2025-12-16T02:28:41.426053Z","caller":"traceutil/trace.go:172","msg":"trace[1627952095] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:1; response_revision:1432; }","duration":"352.515496ms","start":"2025-12-16T02:28:41.073530Z","end":"2025-12-16T02:28:41.426045Z","steps":["trace[1627952095] 'range keys from in-memory index tree'  (duration: 352.316672ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T02:28:41.426086Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-16T02:28:41.073516Z","time spent":"352.560408ms","remote":"127.0.0.1:48062","response type":"/etcdserverpb.KV/Range","request count":0,"request size":87,"response count":1,"response size":658,"request content":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" limit:1 "}
	{"level":"warn","ts":"2025-12-16T02:28:41.427187Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"254.30788ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T02:28:41.428379Z","caller":"traceutil/trace.go:172","msg":"trace[1775890357] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1432; }","duration":"255.566134ms","start":"2025-12-16T02:28:41.172804Z","end":"2025-12-16T02:28:41.428370Z","steps":["trace[1775890357] 'agreement among raft nodes before linearized reading'  (duration: 252.699212ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-16T02:28:41.428561Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"186.764337ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshotcontents\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-16T02:28:41.428597Z","caller":"traceutil/trace.go:172","msg":"trace[735438485] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshotcontents; range_end:; response_count:0; response_revision:1433; }","duration":"186.803017ms","start":"2025-12-16T02:28:41.241786Z","end":"2025-12-16T02:28:41.428589Z","steps":["trace[735438485] 'agreement among raft nodes before linearized reading'  (duration: 186.620629ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T02:28:41.428096Z","caller":"traceutil/trace.go:172","msg":"trace[1823760032] transaction","detail":"{read_only:false; response_revision:1433; number_of_response:1; }","duration":"291.079186ms","start":"2025-12-16T02:28:41.137009Z","end":"2025-12-16T02:28:41.428088Z","steps":["trace[1823760032] 'process raft request'  (duration: 288.538653ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-16T02:28:41.429181Z","caller":"traceutil/trace.go:172","msg":"trace[1322004444] transaction","detail":"{read_only:false; response_revision:1434; number_of_response:1; }","duration":"108.910408ms","start":"2025-12-16T02:28:41.320263Z","end":"2025-12-16T02:28:41.429174Z","steps":["trace[1322004444] 'process raft request'  (duration: 108.700295ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:31:13 up 5 min,  0 users,  load average: 0.40, 0.78, 0.42
	Linux addons-703051 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 00:48:01 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [fc4ee09f2d08e04fafa64043bf9167661a9fac75aa2828ba1df68e1e9ac9d42d] <==
	W1216 02:27:13.061545       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1216 02:28:18.725750       1 conn.go:339] Error on socket receive: read tcp 192.168.39.237:8443->192.168.39.1:39058: use of closed network connection
	E1216 02:28:18.909892       1 conn.go:339] Error on socket receive: read tcp 192.168.39.237:8443->192.168.39.1:39086: use of closed network connection
	I1216 02:28:28.102185       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.41.31"}
	I1216 02:28:45.840837       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1216 02:28:46.006776       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.44.199"}
	I1216 02:29:08.994552       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1216 02:29:11.591039       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1216 02:29:39.078476       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 02:29:39.078542       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 02:29:39.105637       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 02:29:39.105736       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 02:29:39.116969       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 02:29:39.117012       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 02:29:39.136265       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 02:29:39.136346       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 02:29:39.162163       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 02:29:39.162398       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E1216 02:29:40.051521       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	W1216 02:29:40.117313       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	E1216 02:29:40.130969       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	W1216 02:29:40.163097       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1216 02:29:40.183753       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E1216 02:29:40.278886       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	I1216 02:31:12.203132       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.251.56"}
	
	
	==> kube-controller-manager [960abe7ee91b99d086a9808387f532e974c02cfe22515b93b199b699e1435675] <==
	E1216 02:29:44.864267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 02:29:47.652846       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 02:29:47.653807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 02:29:48.250007       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 02:29:48.250990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 02:29:50.035109       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 02:29:50.036207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 02:29:55.173398       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 02:29:55.174806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 02:29:56.110857       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 02:29:56.111861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 02:30:01.712447       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 02:30:01.713282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 02:30:10.129761       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 02:30:10.130744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 02:30:15.996370       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 02:30:15.997425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 02:30:20.374642       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 02:30:20.376199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 02:30:45.958258       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 02:30:45.959518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 02:30:53.413869       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 02:30:53.414882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1216 02:30:56.228862       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1216 02:30:56.229778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [1947cc0b3ab5e45d542ffca511b910b70e9b09ab19381d77587d1ffa064d6217] <==
	I1216 02:26:45.647536       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 02:26:45.748830       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 02:26:45.748912       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.237"]
	E1216 02:26:45.749024       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 02:26:46.051271       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1216 02:26:46.051340       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 02:26:46.051373       1 server_linux.go:132] "Using iptables Proxier"
	I1216 02:26:46.068175       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 02:26:46.081747       1 server.go:527] "Version info" version="v1.34.2"
	I1216 02:26:46.081782       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 02:26:46.088951       1 config.go:200] "Starting service config controller"
	I1216 02:26:46.088981       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 02:26:46.088996       1 config.go:106] "Starting endpoint slice config controller"
	I1216 02:26:46.089000       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 02:26:46.089009       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 02:26:46.089012       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 02:26:46.094399       1 config.go:309] "Starting node config controller"
	I1216 02:26:46.094426       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 02:26:46.094433       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 02:26:46.189999       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 02:26:46.190023       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 02:26:46.190069       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5f075f2bc2541e1114ca94b40540568b39a2ac8c5851d908739adbca47426b40] <==
	E1216 02:26:35.093106       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1216 02:26:35.093247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 02:26:35.093352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 02:26:35.093441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1216 02:26:35.093469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 02:26:35.093506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 02:26:35.093911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 02:26:35.093944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 02:26:35.093777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 02:26:35.900519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1216 02:26:35.903807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1216 02:26:35.908156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1216 02:26:35.908228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1216 02:26:35.917828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1216 02:26:35.935984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1216 02:26:36.109950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1216 02:26:36.139191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1216 02:26:36.174121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1216 02:26:36.243917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1216 02:26:36.248153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1216 02:26:36.288186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1216 02:26:36.304473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1216 02:26:36.357548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1216 02:26:36.362967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1216 02:26:38.477728       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 02:29:42 addons-703051 kubelet[1505]: I1216 02:29:42.315346    1505 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa8d34799b67e535cc6e8c3d14ce00e6d4b88d599323ff84e50f1e089eb02bf4"} err="failed to get container status \"aa8d34799b67e535cc6e8c3d14ce00e6d4b88d599323ff84e50f1e089eb02bf4\": rpc error: code = NotFound desc = could not find container \"aa8d34799b67e535cc6e8c3d14ce00e6d4b88d599323ff84e50f1e089eb02bf4\": container with ID starting with aa8d34799b67e535cc6e8c3d14ce00e6d4b88d599323ff84e50f1e089eb02bf4 not found: ID does not exist"
	Dec 16 02:29:42 addons-703051 kubelet[1505]: I1216 02:29:42.777927    1505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="023c04c8-d489-4194-8bf8-2f64df0827e2" path="/var/lib/kubelet/pods/023c04c8-d489-4194-8bf8-2f64df0827e2/volumes"
	Dec 16 02:29:42 addons-703051 kubelet[1505]: I1216 02:29:42.778292    1505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a6fa29b-fa31-4375-aeed-182a5dd53b2e" path="/var/lib/kubelet/pods/1a6fa29b-fa31-4375-aeed-182a5dd53b2e/volumes"
	Dec 16 02:29:42 addons-703051 kubelet[1505]: I1216 02:29:42.778773    1505 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3416c953-7a2b-4c86-b00e-d9bd8a5a3cbd" path="/var/lib/kubelet/pods/3416c953-7a2b-4c86-b00e-d9bd8a5a3cbd/volumes"
	Dec 16 02:29:49 addons-703051 kubelet[1505]: E1216 02:29:49.167104    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765852189165635244 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 02:29:49 addons-703051 kubelet[1505]: E1216 02:29:49.167129    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765852189165635244 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 02:29:59 addons-703051 kubelet[1505]: E1216 02:29:59.169625    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765852199169032919 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 02:29:59 addons-703051 kubelet[1505]: E1216 02:29:59.169656    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765852199169032919 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 02:30:09 addons-703051 kubelet[1505]: E1216 02:30:09.172590    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765852209172164163 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 02:30:09 addons-703051 kubelet[1505]: E1216 02:30:09.172631    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765852209172164163 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 02:30:19 addons-703051 kubelet[1505]: E1216 02:30:19.175266    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765852219174803660 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 02:30:19 addons-703051 kubelet[1505]: E1216 02:30:19.175291    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765852219174803660 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 02:30:29 addons-703051 kubelet[1505]: E1216 02:30:29.178082    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765852229177595506 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 02:30:29 addons-703051 kubelet[1505]: E1216 02:30:29.178116    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765852229177595506 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 02:30:39 addons-703051 kubelet[1505]: E1216 02:30:39.182213    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765852239181479359 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 02:30:39 addons-703051 kubelet[1505]: E1216 02:30:39.182250    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765852239181479359 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 02:30:40 addons-703051 kubelet[1505]: I1216 02:30:40.775510    1505 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-4fpsx" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 02:30:49 addons-703051 kubelet[1505]: E1216 02:30:49.184891    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765852249184538291 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 02:30:49 addons-703051 kubelet[1505]: E1216 02:30:49.184915    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765852249184538291 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 02:30:54 addons-703051 kubelet[1505]: I1216 02:30:54.773932    1505 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 02:30:59 addons-703051 kubelet[1505]: E1216 02:30:59.187783    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765852259187199865 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 02:30:59 addons-703051 kubelet[1505]: E1216 02:30:59.187820    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765852259187199865 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 02:31:09 addons-703051 kubelet[1505]: E1216 02:31:09.191641    1505 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765852269190957268 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 02:31:09 addons-703051 kubelet[1505]: E1216 02:31:09.191899    1505 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765852269190957268 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:545771} inodes_used:{value:187}}"
	Dec 16 02:31:12 addons-703051 kubelet[1505]: I1216 02:31:12.227093    1505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvm7w\" (UniqueName: \"kubernetes.io/projected/2fc9da29-e194-4963-9517-d1288ba2b8a8-kube-api-access-nvm7w\") pod \"hello-world-app-5d498dc89-8b4zv\" (UID: \"2fc9da29-e194-4963-9517-d1288ba2b8a8\") " pod="default/hello-world-app-5d498dc89-8b4zv"
	
	
	==> storage-provisioner [c68acdb398858e7f163118bca702ed13a99eefaee99e6d8be83f3af1ec90af7b] <==
	W1216 02:30:48.793908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:30:50.798878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:30:50.805341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:30:52.808874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:30:52.813357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:30:54.816428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:30:54.823498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:30:56.826745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:30:56.831605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:30:58.835449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:30:58.845545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:31:00.849389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:31:00.856037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:31:02.859070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:31:02.866177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:31:04.870051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:31:04.874585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:31:06.877388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:31:06.884105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:31:08.887384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:31:08.892257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:31:10.895849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:31:10.902594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:31:12.907420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1216 02:31:12.913367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
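The repeated storage-provisioner warnings at the tail of the dump only indicate that the client is still watching the legacy v1 Endpoints API; the replacement API the warning points to can be inspected directly. A minimal sketch against the same context, assuming the cluster is still reachable:

	# EndpointSlice (discovery.k8s.io/v1) is the resource the deprecation warning recommends
	kubectl --context addons-703051 get endpointslices.discovery.k8s.io -A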
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-703051 -n addons-703051
helpers_test.go:270: (dbg) Run:  kubectl --context addons-703051 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: hello-world-app-5d498dc89-8b4zv ingress-nginx-admission-create-vvbgk ingress-nginx-admission-patch-srpnh
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-703051 describe pod hello-world-app-5d498dc89-8b4zv ingress-nginx-admission-create-vvbgk ingress-nginx-admission-patch-srpnh
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-703051 describe pod hello-world-app-5d498dc89-8b4zv ingress-nginx-admission-create-vvbgk ingress-nginx-admission-patch-srpnh: exit status 1 (70.949435ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-8b4zv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-703051/192.168.39.237
	Start Time:       Tue, 16 Dec 2025 02:31:12 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nvm7w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-nvm7w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-8b4zv to addons-703051
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-vvbgk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-srpnh" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-703051 describe pod hello-world-app-5d498dc89-8b4zv ingress-nginx-admission-create-vvbgk ingress-nginx-admission-patch-srpnh: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-703051 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-703051 addons disable ingress-dns --alsologtostderr -v=1: (1.727309562s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-703051 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-703051 addons disable ingress --alsologtostderr -v=1: (7.69950441s)
--- FAIL: TestAddons/parallel/Ingress (158.06s)
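The post-mortem queries captured above can be replayed by hand while the addons-703051 profile still exists; a minimal sketch using the same commands the harness ran:

	# Host status for the profile
	out/minikube-linux-amd64 status --format={{.Host}} -p addons-703051 -n addons-703051

	# Pods that never reached the Running phase, across all namespaces
	kubectl --context addons-703051 get po -A --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

	# Detailed state of the pod that was still ContainerCreating
	kubectl --context addons-703051 describe pod hello-world-app-5d498dc89-8b4zv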

                                                
                                    
x
+
TestPreload (144.68s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-235435 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio
E1216 03:22:52.679728    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:22:57.461795    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:23:09.599167    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-235435 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio: (1m29.205682442s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-235435 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-235435 image pull gcr.io/k8s-minikube/busybox: (3.367837822s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-235435
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-235435: (6.736611738s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-235435 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-235435 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (42.851825837s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-235435 image list
preload_test.go:73: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/kube-scheduler:v1.34.2
	registry.k8s.io/kube-proxy:v1.34.2
	registry.k8s.io/kube-controller-manager:v1.34.2
	registry.k8s.io/kube-apiserver:v1.34.2
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20250512-df8de77b

                                                
                                                
-- /stdout --
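The assertion at preload_test.go:73 expects an image pulled while the profile ran with --preload=false to still be listed after the stop and the subsequent --preload=true restart. A condensed manual replay of the sequence logged above (profile name and flags kept only for illustration):

	out/minikube-linux-amd64 start -p test-preload-235435 --memory=3072 --preload=false --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p test-preload-235435 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-235435
	out/minikube-linux-amd64 start -p test-preload-235435 --preload=true --driver=kvm2 --container-runtime=crio
	# The failure: gcr.io/k8s-minikube/busybox is missing from this list after the preload restart
	out/minikube-linux-amd64 -p test-preload-235435 image list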
panic.go:615: *** TestPreload FAILED at 2025-12-16 03:24:04.10099645 +0000 UTC m=+3523.535659759
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-235435 -n test-preload-235435
helpers_test.go:253: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-235435 logs -n 25
helpers_test.go:261: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-496255 ssh -n multinode-496255-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-496255     │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh     │ multinode-496255 ssh -n multinode-496255 sudo cat /home/docker/cp-test_multinode-496255-m03_multinode-496255.txt                                          │ multinode-496255     │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ cp      │ multinode-496255 cp multinode-496255-m03:/home/docker/cp-test.txt multinode-496255-m02:/home/docker/cp-test_multinode-496255-m03_multinode-496255-m02.txt │ multinode-496255     │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh     │ multinode-496255 ssh -n multinode-496255-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-496255     │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ ssh     │ multinode-496255 ssh -n multinode-496255-m02 sudo cat /home/docker/cp-test_multinode-496255-m03_multinode-496255-m02.txt                                  │ multinode-496255     │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ node    │ multinode-496255 node stop m03                                                                                                                            │ multinode-496255     │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ node    │ multinode-496255 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-496255     │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:11 UTC │
	│ node    │ list -p multinode-496255                                                                                                                                  │ multinode-496255     │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │                     │
	│ stop    │ -p multinode-496255                                                                                                                                       │ multinode-496255     │ jenkins │ v1.37.0 │ 16 Dec 25 03:11 UTC │ 16 Dec 25 03:14 UTC │
	│ start   │ -p multinode-496255 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-496255     │ jenkins │ v1.37.0 │ 16 Dec 25 03:14 UTC │ 16 Dec 25 03:16 UTC │
	│ node    │ list -p multinode-496255                                                                                                                                  │ multinode-496255     │ jenkins │ v1.37.0 │ 16 Dec 25 03:16 UTC │                     │
	│ node    │ multinode-496255 node delete m03                                                                                                                          │ multinode-496255     │ jenkins │ v1.37.0 │ 16 Dec 25 03:16 UTC │ 16 Dec 25 03:16 UTC │
	│ stop    │ multinode-496255 stop                                                                                                                                     │ multinode-496255     │ jenkins │ v1.37.0 │ 16 Dec 25 03:16 UTC │ 16 Dec 25 03:19 UTC │
	│ start   │ -p multinode-496255 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-496255     │ jenkins │ v1.37.0 │ 16 Dec 25 03:19 UTC │ 16 Dec 25 03:21 UTC │
	│ node    │ list -p multinode-496255                                                                                                                                  │ multinode-496255     │ jenkins │ v1.37.0 │ 16 Dec 25 03:21 UTC │                     │
	│ start   │ -p multinode-496255-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-496255-m02 │ jenkins │ v1.37.0 │ 16 Dec 25 03:21 UTC │                     │
	│ start   │ -p multinode-496255-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-496255-m03 │ jenkins │ v1.37.0 │ 16 Dec 25 03:21 UTC │ 16 Dec 25 03:21 UTC │
	│ node    │ add -p multinode-496255                                                                                                                                   │ multinode-496255     │ jenkins │ v1.37.0 │ 16 Dec 25 03:21 UTC │                     │
	│ delete  │ -p multinode-496255-m03                                                                                                                                   │ multinode-496255-m03 │ jenkins │ v1.37.0 │ 16 Dec 25 03:21 UTC │ 16 Dec 25 03:21 UTC │
	│ delete  │ -p multinode-496255                                                                                                                                       │ multinode-496255     │ jenkins │ v1.37.0 │ 16 Dec 25 03:21 UTC │ 16 Dec 25 03:21 UTC │
	│ start   │ -p test-preload-235435 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio                                │ test-preload-235435  │ jenkins │ v1.37.0 │ 16 Dec 25 03:21 UTC │ 16 Dec 25 03:23 UTC │
	│ image   │ test-preload-235435 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-235435  │ jenkins │ v1.37.0 │ 16 Dec 25 03:23 UTC │ 16 Dec 25 03:23 UTC │
	│ stop    │ -p test-preload-235435                                                                                                                                    │ test-preload-235435  │ jenkins │ v1.37.0 │ 16 Dec 25 03:23 UTC │ 16 Dec 25 03:23 UTC │
	│ start   │ -p test-preload-235435 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                          │ test-preload-235435  │ jenkins │ v1.37.0 │ 16 Dec 25 03:23 UTC │ 16 Dec 25 03:24 UTC │
	│ image   │ test-preload-235435 image list                                                                                                                            │ test-preload-235435  │ jenkins │ v1.37.0 │ 16 Dec 25 03:24 UTC │ 16 Dec 25 03:24 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:23:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:23:21.113529   35363 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:23:21.113634   35363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:23:21.113642   35363 out.go:374] Setting ErrFile to fd 2...
	I1216 03:23:21.113646   35363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:23:21.113834   35363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
	I1216 03:23:21.114249   35363 out.go:368] Setting JSON to false
	I1216 03:23:21.115081   35363 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3946,"bootTime":1765851455,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:23:21.115126   35363 start.go:143] virtualization: kvm guest
	I1216 03:23:21.117664   35363 out.go:179] * [test-preload-235435] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:23:21.118978   35363 notify.go:221] Checking for updates...
	I1216 03:23:21.118996   35363 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:23:21.120232   35363 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:23:21.121565   35363 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	I1216 03:23:21.122697   35363 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	I1216 03:23:21.123939   35363 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:23:21.124992   35363 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:23:21.126330   35363 config.go:182] Loaded profile config "test-preload-235435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:23:21.126775   35363 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:23:21.159939   35363 out.go:179] * Using the kvm2 driver based on existing profile
	I1216 03:23:21.161097   35363 start.go:309] selected driver: kvm2
	I1216 03:23:21.161119   35363 start.go:927] validating driver "kvm2" against &{Name:test-preload-235435 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.34.2 ClusterName:test-preload-235435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:23:21.161239   35363 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:23:21.162135   35363 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:23:21.162158   35363 cni.go:84] Creating CNI manager for ""
	I1216 03:23:21.162204   35363 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 03:23:21.162240   35363 start.go:353] cluster config:
	{Name:test-preload-235435 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:test-preload-235435 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:23:21.162321   35363 iso.go:125] acquiring lock: {Name:mk055aa36b1051bc664b283a8a6fb2af4db94c44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:23:21.164256   35363 out.go:179] * Starting "test-preload-235435" primary control-plane node in "test-preload-235435" cluster
	I1216 03:23:21.165272   35363 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:23:21.165305   35363 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 03:23:21.165315   35363 cache.go:65] Caching tarball of preloaded images
	I1216 03:23:21.165377   35363 preload.go:238] Found /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 03:23:21.165387   35363 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 03:23:21.165464   35363 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/test-preload-235435/config.json ...
	I1216 03:23:21.165640   35363 start.go:360] acquireMachinesLock for test-preload-235435: {Name:mk6501572e7fc03699ef9d932e34f995d8ad6f98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:23:21.165677   35363 start.go:364] duration metric: took 21.484µs to acquireMachinesLock for "test-preload-235435"
	I1216 03:23:21.165690   35363 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:23:21.165694   35363 fix.go:54] fixHost starting: 
	I1216 03:23:21.167234   35363 fix.go:112] recreateIfNeeded on test-preload-235435: state=Stopped err=<nil>
	W1216 03:23:21.167250   35363 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:23:21.168627   35363 out.go:252] * Restarting existing kvm2 VM for "test-preload-235435" ...
	I1216 03:23:21.168662   35363 main.go:143] libmachine: starting domain...
	I1216 03:23:21.168672   35363 main.go:143] libmachine: ensuring networks are active...
	I1216 03:23:21.169307   35363 main.go:143] libmachine: Ensuring network default is active
	I1216 03:23:21.169616   35363 main.go:143] libmachine: Ensuring network mk-test-preload-235435 is active
	I1216 03:23:21.169968   35363 main.go:143] libmachine: getting domain XML...
	I1216 03:23:21.170837   35363 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-235435</name>
	  <uuid>c10c842f-73ae-43f1-bd33-6123e9382e02</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22158-5036/.minikube/machines/test-preload-235435/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22158-5036/.minikube/machines/test-preload-235435/test-preload-235435.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:65:d6:79'/>
	      <source network='mk-test-preload-235435'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:c6:bb:d1'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
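The XML above is the full libvirt domain definition that libmachine boots from. For poking at the same step by hand, a rough virsh equivalent is sketched below; this assumes virsh is installed on the host and pointed at the same qemu:///system URI (minikube itself drives libvirt through Go bindings rather than shelling out to virsh).

  # Start the already-defined domain and look up its DHCP lease,
  # mirroring "starting domain..." / "waiting for IP..." in the log below.
  virsh -c qemu:///system start test-preload-235435
  virsh -c qemu:///system domifaddr test-preload-235435   # should report 192.168.39.216 once the guest is up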
	
	I1216 03:23:22.401673   35363 main.go:143] libmachine: waiting for domain to start...
	I1216 03:23:22.403117   35363 main.go:143] libmachine: domain is now running
	I1216 03:23:22.403153   35363 main.go:143] libmachine: waiting for IP...
	I1216 03:23:22.404122   35363 main.go:143] libmachine: domain test-preload-235435 has defined MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:22.404678   35363 main.go:143] libmachine: domain test-preload-235435 has current primary IP address 192.168.39.216 and MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:22.404695   35363 main.go:143] libmachine: found domain IP: 192.168.39.216
	I1216 03:23:22.404704   35363 main.go:143] libmachine: reserving static IP address...
	I1216 03:23:22.405160   35363 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-235435", mac: "52:54:00:65:d6:79", ip: "192.168.39.216"} in network mk-test-preload-235435: {Iface:virbr1 ExpiryTime:2025-12-16 04:21:56 +0000 UTC Type:0 Mac:52:54:00:65:d6:79 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:test-preload-235435 Clientid:01:52:54:00:65:d6:79}
	I1216 03:23:22.405198   35363 main.go:143] libmachine: skip adding static IP to network mk-test-preload-235435 - found existing host DHCP lease matching {name: "test-preload-235435", mac: "52:54:00:65:d6:79", ip: "192.168.39.216"}
	I1216 03:23:22.405217   35363 main.go:143] libmachine: reserved static IP address 192.168.39.216 for domain test-preload-235435
	I1216 03:23:22.405229   35363 main.go:143] libmachine: waiting for SSH...
	I1216 03:23:22.405242   35363 main.go:143] libmachine: Getting to WaitForSSH function...
	I1216 03:23:22.407712   35363 main.go:143] libmachine: domain test-preload-235435 has defined MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:22.408110   35363 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:d6:79", ip: ""} in network mk-test-preload-235435: {Iface:virbr1 ExpiryTime:2025-12-16 04:21:56 +0000 UTC Type:0 Mac:52:54:00:65:d6:79 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:test-preload-235435 Clientid:01:52:54:00:65:d6:79}
	I1216 03:23:22.408142   35363 main.go:143] libmachine: domain test-preload-235435 has defined IP address 192.168.39.216 and MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:22.408356   35363 main.go:143] libmachine: Using SSH client type: native
	I1216 03:23:22.408576   35363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1216 03:23:22.408589   35363 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1216 03:23:25.504145   35363 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.216:22: connect: no route to host
	I1216 03:23:31.584177   35363 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.216:22: connect: no route to host
	I1216 03:23:34.687843   35363 main.go:143] libmachine: SSH cmd err, output: <nil>: 
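The two "no route to host" errors are the expected retry loop while the guest finishes booting; the readiness probe is nothing more than an SSH `exit 0`. Run by hand with the key path and user that appear further down in this log, the equivalent probe would be roughly:

  # Exit status 0 means the machine is reachable over SSH; non-zero means keep waiting.
  ssh -i /home/jenkins/minikube-integration/22158-5036/.minikube/machines/test-preload-235435/id_rsa \
      -o StrictHostKeyChecking=no -o ConnectTimeout=5 \
      docker@192.168.39.216 'exit 0'; echo "ssh exit: $?"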
	I1216 03:23:34.690871   35363 main.go:143] libmachine: domain test-preload-235435 has defined MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:34.691261   35363 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:d6:79", ip: ""} in network mk-test-preload-235435: {Iface:virbr1 ExpiryTime:2025-12-16 04:23:31 +0000 UTC Type:0 Mac:52:54:00:65:d6:79 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:test-preload-235435 Clientid:01:52:54:00:65:d6:79}
	I1216 03:23:34.691290   35363 main.go:143] libmachine: domain test-preload-235435 has defined IP address 192.168.39.216 and MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:34.691517   35363 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/test-preload-235435/config.json ...
	I1216 03:23:34.691704   35363 machine.go:94] provisionDockerMachine start ...
	I1216 03:23:34.693609   35363 main.go:143] libmachine: domain test-preload-235435 has defined MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:34.693914   35363 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:d6:79", ip: ""} in network mk-test-preload-235435: {Iface:virbr1 ExpiryTime:2025-12-16 04:23:31 +0000 UTC Type:0 Mac:52:54:00:65:d6:79 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:test-preload-235435 Clientid:01:52:54:00:65:d6:79}
	I1216 03:23:34.693957   35363 main.go:143] libmachine: domain test-preload-235435 has defined IP address 192.168.39.216 and MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:34.694137   35363 main.go:143] libmachine: Using SSH client type: native
	I1216 03:23:34.694346   35363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1216 03:23:34.694358   35363 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 03:23:34.794737   35363 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 03:23:34.794760   35363 buildroot.go:166] provisioning hostname "test-preload-235435"
	I1216 03:23:34.797235   35363 main.go:143] libmachine: domain test-preload-235435 has defined MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:34.797624   35363 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:d6:79", ip: ""} in network mk-test-preload-235435: {Iface:virbr1 ExpiryTime:2025-12-16 04:23:31 +0000 UTC Type:0 Mac:52:54:00:65:d6:79 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:test-preload-235435 Clientid:01:52:54:00:65:d6:79}
	I1216 03:23:34.797653   35363 main.go:143] libmachine: domain test-preload-235435 has defined IP address 192.168.39.216 and MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:34.797874   35363 main.go:143] libmachine: Using SSH client type: native
	I1216 03:23:34.798082   35363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1216 03:23:34.798094   35363 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-235435 && echo "test-preload-235435" | sudo tee /etc/hostname
	I1216 03:23:34.913035   35363 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-235435
	
	I1216 03:23:34.915996   35363 main.go:143] libmachine: domain test-preload-235435 has defined MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:34.916420   35363 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:d6:79", ip: ""} in network mk-test-preload-235435: {Iface:virbr1 ExpiryTime:2025-12-16 04:23:31 +0000 UTC Type:0 Mac:52:54:00:65:d6:79 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:test-preload-235435 Clientid:01:52:54:00:65:d6:79}
	I1216 03:23:34.916456   35363 main.go:143] libmachine: domain test-preload-235435 has defined IP address 192.168.39.216 and MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:34.916662   35363 main.go:143] libmachine: Using SSH client type: native
	I1216 03:23:34.916898   35363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1216 03:23:34.916915   35363 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-235435' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-235435/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-235435' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:23:35.025296   35363 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:23:35.025330   35363 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5036/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5036/.minikube}
	I1216 03:23:35.025368   35363 buildroot.go:174] setting up certificates
	I1216 03:23:35.025384   35363 provision.go:84] configureAuth start
	I1216 03:23:35.028071   35363 main.go:143] libmachine: domain test-preload-235435 has defined MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:35.028489   35363 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:d6:79", ip: ""} in network mk-test-preload-235435: {Iface:virbr1 ExpiryTime:2025-12-16 04:23:31 +0000 UTC Type:0 Mac:52:54:00:65:d6:79 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:test-preload-235435 Clientid:01:52:54:00:65:d6:79}
	I1216 03:23:35.028521   35363 main.go:143] libmachine: domain test-preload-235435 has defined IP address 192.168.39.216 and MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:35.030540   35363 main.go:143] libmachine: domain test-preload-235435 has defined MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:35.030869   35363 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:d6:79", ip: ""} in network mk-test-preload-235435: {Iface:virbr1 ExpiryTime:2025-12-16 04:23:31 +0000 UTC Type:0 Mac:52:54:00:65:d6:79 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:test-preload-235435 Clientid:01:52:54:00:65:d6:79}
	I1216 03:23:35.030886   35363 main.go:143] libmachine: domain test-preload-235435 has defined IP address 192.168.39.216 and MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:35.031011   35363 provision.go:143] copyHostCerts
	I1216 03:23:35.031059   35363 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5036/.minikube/ca.pem, removing ...
	I1216 03:23:35.031072   35363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5036/.minikube/ca.pem
	I1216 03:23:35.031136   35363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5036/.minikube/ca.pem (1078 bytes)
	I1216 03:23:35.031240   35363 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5036/.minikube/cert.pem, removing ...
	I1216 03:23:35.031250   35363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5036/.minikube/cert.pem
	I1216 03:23:35.031291   35363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5036/.minikube/cert.pem (1123 bytes)
	I1216 03:23:35.031402   35363 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5036/.minikube/key.pem, removing ...
	I1216 03:23:35.031411   35363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5036/.minikube/key.pem
	I1216 03:23:35.031442   35363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5036/.minikube/key.pem (1679 bytes)
	I1216 03:23:35.031505   35363 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5036/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca-key.pem org=jenkins.test-preload-235435 san=[127.0.0.1 192.168.39.216 localhost minikube test-preload-235435]
	I1216 03:23:35.137328   35363 provision.go:177] copyRemoteCerts
	I1216 03:23:35.137382   35363 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:23:35.139601   35363 main.go:143] libmachine: domain test-preload-235435 has defined MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:35.139981   35363 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:d6:79", ip: ""} in network mk-test-preload-235435: {Iface:virbr1 ExpiryTime:2025-12-16 04:23:31 +0000 UTC Type:0 Mac:52:54:00:65:d6:79 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:test-preload-235435 Clientid:01:52:54:00:65:d6:79}
	I1216 03:23:35.139999   35363 main.go:143] libmachine: domain test-preload-235435 has defined IP address 192.168.39.216 and MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:35.140130   35363 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/test-preload-235435/id_rsa Username:docker}
	I1216 03:23:35.221551   35363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 03:23:35.250601   35363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1216 03:23:35.276484   35363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1216 03:23:35.302492   35363 provision.go:87] duration metric: took 277.089261ms to configureAuth
	I1216 03:23:35.302511   35363 buildroot.go:189] setting minikube options for container-runtime
	I1216 03:23:35.302667   35363 config.go:182] Loaded profile config "test-preload-235435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:23:35.305358   35363 main.go:143] libmachine: domain test-preload-235435 has defined MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:35.305733   35363 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:d6:79", ip: ""} in network mk-test-preload-235435: {Iface:virbr1 ExpiryTime:2025-12-16 04:23:31 +0000 UTC Type:0 Mac:52:54:00:65:d6:79 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:test-preload-235435 Clientid:01:52:54:00:65:d6:79}
	I1216 03:23:35.305751   35363 main.go:143] libmachine: domain test-preload-235435 has defined IP address 192.168.39.216 and MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:35.305896   35363 main.go:143] libmachine: Using SSH client type: native
	I1216 03:23:35.306083   35363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1216 03:23:35.306097   35363 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 03:23:35.536191   35363 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 03:23:35.536233   35363 machine.go:97] duration metric: took 844.515873ms to provisionDockerMachine
	I1216 03:23:35.536247   35363 start.go:293] postStartSetup for "test-preload-235435" (driver="kvm2")
	I1216 03:23:35.536259   35363 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:23:35.536326   35363 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:23:35.539122   35363 main.go:143] libmachine: domain test-preload-235435 has defined MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:35.539491   35363 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:d6:79", ip: ""} in network mk-test-preload-235435: {Iface:virbr1 ExpiryTime:2025-12-16 04:23:31 +0000 UTC Type:0 Mac:52:54:00:65:d6:79 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:test-preload-235435 Clientid:01:52:54:00:65:d6:79}
	I1216 03:23:35.539523   35363 main.go:143] libmachine: domain test-preload-235435 has defined IP address 192.168.39.216 and MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:35.539664   35363 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/test-preload-235435/id_rsa Username:docker}
	I1216 03:23:35.625453   35363 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:23:35.629747   35363 info.go:137] Remote host: Buildroot 2025.02
	I1216 03:23:35.629769   35363 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5036/.minikube/addons for local assets ...
	I1216 03:23:35.629833   35363 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5036/.minikube/files for local assets ...
	I1216 03:23:35.629953   35363 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-5036/.minikube/files/etc/ssl/certs/89742.pem -> 89742.pem in /etc/ssl/certs
	I1216 03:23:35.630079   35363 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:23:35.640573   35363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/files/etc/ssl/certs/89742.pem --> /etc/ssl/certs/89742.pem (1708 bytes)
	I1216 03:23:35.666468   35363 start.go:296] duration metric: took 130.21019ms for postStartSetup
	I1216 03:23:35.666503   35363 fix.go:56] duration metric: took 14.500808261s for fixHost
	I1216 03:23:35.669175   35363 main.go:143] libmachine: domain test-preload-235435 has defined MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:35.669644   35363 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:d6:79", ip: ""} in network mk-test-preload-235435: {Iface:virbr1 ExpiryTime:2025-12-16 04:23:31 +0000 UTC Type:0 Mac:52:54:00:65:d6:79 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:test-preload-235435 Clientid:01:52:54:00:65:d6:79}
	I1216 03:23:35.669676   35363 main.go:143] libmachine: domain test-preload-235435 has defined IP address 192.168.39.216 and MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:35.669855   35363 main.go:143] libmachine: Using SSH client type: native
	I1216 03:23:35.670117   35363 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.39.216 22 <nil> <nil>}
	I1216 03:23:35.670132   35363 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1216 03:23:35.770469   35363 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765855415.737287979
	
	I1216 03:23:35.770486   35363 fix.go:216] guest clock: 1765855415.737287979
	I1216 03:23:35.770493   35363 fix.go:229] Guest: 2025-12-16 03:23:35.737287979 +0000 UTC Remote: 2025-12-16 03:23:35.666507181 +0000 UTC m=+14.597429195 (delta=70.780798ms)
	I1216 03:23:35.770511   35363 fix.go:200] guest clock delta is within tolerance: 70.780798ms
	I1216 03:23:35.770522   35363 start.go:83] releasing machines lock for "test-preload-235435", held for 14.604830891s
	I1216 03:23:35.773306   35363 main.go:143] libmachine: domain test-preload-235435 has defined MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:35.773668   35363 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:d6:79", ip: ""} in network mk-test-preload-235435: {Iface:virbr1 ExpiryTime:2025-12-16 04:23:31 +0000 UTC Type:0 Mac:52:54:00:65:d6:79 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:test-preload-235435 Clientid:01:52:54:00:65:d6:79}
	I1216 03:23:35.773698   35363 main.go:143] libmachine: domain test-preload-235435 has defined IP address 192.168.39.216 and MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:35.774173   35363 ssh_runner.go:195] Run: cat /version.json
	I1216 03:23:35.774242   35363 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:23:35.777209   35363 main.go:143] libmachine: domain test-preload-235435 has defined MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:35.777316   35363 main.go:143] libmachine: domain test-preload-235435 has defined MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:35.777656   35363 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:d6:79", ip: ""} in network mk-test-preload-235435: {Iface:virbr1 ExpiryTime:2025-12-16 04:23:31 +0000 UTC Type:0 Mac:52:54:00:65:d6:79 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:test-preload-235435 Clientid:01:52:54:00:65:d6:79}
	I1216 03:23:35.777688   35363 main.go:143] libmachine: domain test-preload-235435 has defined IP address 192.168.39.216 and MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:35.777661   35363 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:d6:79", ip: ""} in network mk-test-preload-235435: {Iface:virbr1 ExpiryTime:2025-12-16 04:23:31 +0000 UTC Type:0 Mac:52:54:00:65:d6:79 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:test-preload-235435 Clientid:01:52:54:00:65:d6:79}
	I1216 03:23:35.777768   35363 main.go:143] libmachine: domain test-preload-235435 has defined IP address 192.168.39.216 and MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:35.777846   35363 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/test-preload-235435/id_rsa Username:docker}
	I1216 03:23:35.778093   35363 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/test-preload-235435/id_rsa Username:docker}
	I1216 03:23:35.882206   35363 ssh_runner.go:195] Run: systemctl --version
	I1216 03:23:35.887827   35363 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 03:23:36.032487   35363 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:23:36.038731   35363 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:23:36.038799   35363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 03:23:36.056729   35363 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 03:23:36.056745   35363 start.go:496] detecting cgroup driver to use...
	I1216 03:23:36.056791   35363 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 03:23:36.074443   35363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:23:36.089196   35363 docker.go:218] disabling cri-docker service (if available) ...
	I1216 03:23:36.089261   35363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 03:23:36.104959   35363 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 03:23:36.120734   35363 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 03:23:36.259271   35363 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 03:23:36.470850   35363 docker.go:234] disabling docker service ...
	I1216 03:23:36.470907   35363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 03:23:36.487056   35363 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 03:23:36.500673   35363 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 03:23:36.657176   35363 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 03:23:36.789895   35363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 03:23:36.804769   35363 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:23:36.824721   35363 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 03:23:36.824766   35363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:23:36.835837   35363 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 03:23:36.835899   35363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:23:36.846848   35363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:23:36.857632   35363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:23:36.868614   35363 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:23:36.880324   35363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:23:36.891178   35363 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:23:36.909634   35363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
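The sed commands above patch /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup driver, conmon cgroup, unprivileged-port sysctl). A quick manual check that the edits landed, with the expected values taken straight from those commands:

  sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
  # Expected to include:
  #   pause_image = "registry.k8s.io/pause:3.10.1"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",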
	I1216 03:23:36.920763   35363 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:23:36.930227   35363 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 03:23:36.930277   35363 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 03:23:36.949355   35363 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:23:36.959719   35363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:23:37.095385   35363 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 03:23:37.192402   35363 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 03:23:37.192486   35363 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 03:23:37.197515   35363 start.go:564] Will wait 60s for crictl version
	I1216 03:23:37.197572   35363 ssh_runner.go:195] Run: which crictl
	I1216 03:23:37.201559   35363 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 03:23:37.242559   35363 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
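The same runtime handshake can be repeated by hand against the socket minikube waits on; a sketch, assuming crictl is on the node's PATH as it is in this run:

  # Version handshake plus runtime status over the CRI-O socket.
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info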
	I1216 03:23:37.242634   35363 ssh_runner.go:195] Run: crio --version
	I1216 03:23:37.272801   35363 ssh_runner.go:195] Run: crio --version
	I1216 03:23:37.299230   35363 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1216 03:23:37.302840   35363 main.go:143] libmachine: domain test-preload-235435 has defined MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:37.303227   35363 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:d6:79", ip: ""} in network mk-test-preload-235435: {Iface:virbr1 ExpiryTime:2025-12-16 04:23:31 +0000 UTC Type:0 Mac:52:54:00:65:d6:79 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:test-preload-235435 Clientid:01:52:54:00:65:d6:79}
	I1216 03:23:37.303247   35363 main.go:143] libmachine: domain test-preload-235435 has defined IP address 192.168.39.216 and MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:37.303435   35363 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1216 03:23:37.307400   35363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:23:37.321212   35363 kubeadm.go:884] updating cluster {Name:test-preload-235435 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.34.2 ClusterName:test-preload-235435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:23:37.321350   35363 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:23:37.321388   35363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:23:37.352188   35363 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1216 03:23:37.352242   35363 ssh_runner.go:195] Run: which lz4
	I1216 03:23:37.355883   35363 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 03:23:37.360127   35363 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 03:23:37.360155   35363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1216 03:23:38.488625   35363 crio.go:462] duration metric: took 1.132763947s to copy over tarball
	I1216 03:23:38.488704   35363 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 03:23:39.979280   35363 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.490544107s)
	I1216 03:23:39.979315   35363 crio.go:469] duration metric: took 1.490662585s to extract the tarball
	I1216 03:23:39.979328   35363 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 03:23:40.014298   35363 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:23:40.050132   35363 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:23:40.050153   35363 cache_images.go:86] Images are preloaded, skipping loading
	I1216 03:23:40.050161   35363 kubeadm.go:935] updating node { 192.168.39.216 8443 v1.34.2 crio true true} ...
	I1216 03:23:40.050248   35363 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-235435 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:test-preload-235435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
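The empty `ExecStart=` line in the unit text above is the usual systemd idiom for clearing the base unit's ExecStart before the drop-in (written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below) substitutes the minikube-specific command line. To see the merged result on the node:

  # Print kubelet.service together with its drop-ins, then reload so the override is picked up.
  systemctl cat kubelet
  sudo systemctl daemon-reload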
	I1216 03:23:40.050313   35363 ssh_runner.go:195] Run: crio config
	I1216 03:23:40.099467   35363 cni.go:84] Creating CNI manager for ""
	I1216 03:23:40.099489   35363 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 03:23:40.099504   35363 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 03:23:40.099524   35363 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.216 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-235435 NodeName:test-preload-235435 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:23:40.099635   35363 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.216
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-235435"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.216"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.216"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
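This generated bundle (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is copied to /var/tmp/minikube/kubeadm.yaml.new below. If a config like this needs a sanity check by hand, recent kubeadm releases can validate it without applying anything; a sketch using the bundled binary path from this log:

  # Validate the generated config documents against the kubeadm API types.
  sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new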
	
	I1216 03:23:40.099691   35363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 03:23:40.110833   35363 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 03:23:40.110879   35363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:23:40.121515   35363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1216 03:23:40.139941   35363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 03:23:40.158137   35363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1216 03:23:40.176914   35363 ssh_runner.go:195] Run: grep 192.168.39.216	control-plane.minikube.internal$ /etc/hosts
	I1216 03:23:40.180710   35363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:23:40.193799   35363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:23:40.324639   35363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:23:40.354659   35363 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/test-preload-235435 for IP: 192.168.39.216
	I1216 03:23:40.354679   35363 certs.go:195] generating shared ca certs ...
	I1216 03:23:40.354694   35363 certs.go:227] acquiring lock for ca certs: {Name:mk77e952ddad6d1f2b7d1d07b6d50cdef35b56ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:23:40.354864   35363 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5036/.minikube/ca.key
	I1216 03:23:40.354903   35363 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.key
	I1216 03:23:40.354919   35363 certs.go:257] generating profile certs ...
	I1216 03:23:40.355018   35363 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/test-preload-235435/client.key
	I1216 03:23:40.355072   35363 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/test-preload-235435/apiserver.key.0e32c072
	I1216 03:23:40.355109   35363 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/test-preload-235435/proxy-client.key
	I1216 03:23:40.355212   35363 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/8974.pem (1338 bytes)
	W1216 03:23:40.355244   35363 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5036/.minikube/certs/8974_empty.pem, impossibly tiny 0 bytes
	I1216 03:23:40.355253   35363 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:23:40.355284   35363 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem (1078 bytes)
	I1216 03:23:40.355311   35363 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:23:40.355341   35363 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/key.pem (1679 bytes)
	I1216 03:23:40.355397   35363 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/files/etc/ssl/certs/89742.pem (1708 bytes)
	I1216 03:23:40.356063   35363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:23:40.396243   35363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 03:23:40.427902   35363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:23:40.455193   35363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:23:40.482968   35363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/test-preload-235435/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1216 03:23:40.510778   35363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/test-preload-235435/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1216 03:23:40.536607   35363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/test-preload-235435/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:23:40.563638   35363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/test-preload-235435/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 03:23:40.589613   35363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/files/etc/ssl/certs/89742.pem --> /usr/share/ca-certificates/89742.pem (1708 bytes)
	I1216 03:23:40.615590   35363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:23:40.641463   35363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/certs/8974.pem --> /usr/share/ca-certificates/8974.pem (1338 bytes)
	I1216 03:23:40.667449   35363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:23:40.685632   35363 ssh_runner.go:195] Run: openssl version
	I1216 03:23:40.691345   35363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8974.pem
	I1216 03:23:40.701519   35363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8974.pem /etc/ssl/certs/8974.pem
	I1216 03:23:40.712039   35363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8974.pem
	I1216 03:23:40.716618   35363 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:36 /usr/share/ca-certificates/8974.pem
	I1216 03:23:40.716659   35363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8974.pem
	I1216 03:23:40.723267   35363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:23:40.733295   35363 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8974.pem /etc/ssl/certs/51391683.0
	I1216 03:23:40.743425   35363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89742.pem
	I1216 03:23:40.753413   35363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89742.pem /etc/ssl/certs/89742.pem
	I1216 03:23:40.763618   35363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89742.pem
	I1216 03:23:40.768207   35363 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:36 /usr/share/ca-certificates/89742.pem
	I1216 03:23:40.768269   35363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89742.pem
	I1216 03:23:40.774700   35363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:23:40.785059   35363 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/89742.pem /etc/ssl/certs/3ec20f2e.0
	I1216 03:23:40.795405   35363 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:23:40.805668   35363 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:23:40.815828   35363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:23:40.820245   35363 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:23:40.820305   35363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:23:40.826717   35363 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:23:40.836973   35363 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 03:23:40.847075   35363 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:23:40.851638   35363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 03:23:40.858152   35363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 03:23:40.864517   35363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 03:23:40.871276   35363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 03:23:40.877770   35363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 03:23:40.884043   35363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1216 03:23:40.890372   35363 kubeadm.go:401] StartCluster: {Name:test-preload-235435 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
34.2 ClusterName:test-preload-235435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:23:40.890633   35363 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:23:40.890957   35363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:23:40.922739   35363 cri.go:89] found id: ""
	I1216 03:23:40.922818   35363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:23:40.934175   35363 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1216 03:23:40.934192   35363 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1216 03:23:40.934234   35363 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1216 03:23:40.944598   35363 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1216 03:23:40.945021   35363 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-235435" does not appear in /home/jenkins/minikube-integration/22158-5036/kubeconfig
	I1216 03:23:40.945139   35363 kubeconfig.go:62] /home/jenkins/minikube-integration/22158-5036/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-235435" cluster setting kubeconfig missing "test-preload-235435" context setting]
	I1216 03:23:40.945441   35363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/kubeconfig: {Name:mk6832d71ef0ad581fa898dceefc2fcc2fd665b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:23:40.945953   35363 kapi.go:59] client config for test-preload-235435: &rest.Config{Host:"https://192.168.39.216:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-5036/.minikube/profiles/test-preload-235435/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-5036/.minikube/profiles/test-preload-235435/client.key", CAFile:"/home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 03:23:40.946372   35363 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1216 03:23:40.946390   35363 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1216 03:23:40.946395   35363 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1216 03:23:40.946400   35363 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1216 03:23:40.946410   35363 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1216 03:23:40.946753   35363 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1216 03:23:40.956735   35363 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.216
	I1216 03:23:40.956765   35363 kubeadm.go:1161] stopping kube-system containers ...
	I1216 03:23:40.956776   35363 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1216 03:23:40.956831   35363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:23:40.988914   35363 cri.go:89] found id: ""
	I1216 03:23:40.988990   35363 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 03:23:41.005428   35363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:23:41.016180   35363 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:23:41.016197   35363 kubeadm.go:158] found existing configuration files:
	
	I1216 03:23:41.016240   35363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:23:41.025951   35363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:23:41.026004   35363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:23:41.036224   35363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:23:41.045697   35363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:23:41.045746   35363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:23:41.055576   35363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:23:41.065023   35363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:23:41.065072   35363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:23:41.075032   35363 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:23:41.084432   35363 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:23:41.084479   35363 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:23:41.094550   35363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:23:41.104811   35363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:23:41.155223   35363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:23:42.258521   35363 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.103264164s)
	I1216 03:23:42.258595   35363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:23:42.498886   35363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:23:42.558195   35363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:23:42.632300   35363 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:23:42.632382   35363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:23:43.133049   35363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:23:43.633315   35363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:23:44.132572   35363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:23:44.632834   35363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:23:44.666030   35363 api_server.go:72] duration metric: took 2.033734336s to wait for apiserver process to appear ...
	I1216 03:23:44.666054   35363 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:23:44.666071   35363 api_server.go:253] Checking apiserver healthz at https://192.168.39.216:8443/healthz ...
	I1216 03:23:44.666512   35363 api_server.go:269] stopped: https://192.168.39.216:8443/healthz: Get "https://192.168.39.216:8443/healthz": dial tcp 192.168.39.216:8443: connect: connection refused
	I1216 03:23:45.166139   35363 api_server.go:253] Checking apiserver healthz at https://192.168.39.216:8443/healthz ...
	I1216 03:23:46.929054   35363 api_server.go:279] https://192.168.39.216:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 03:23:46.929089   35363 api_server.go:103] status: https://192.168.39.216:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 03:23:46.929107   35363 api_server.go:253] Checking apiserver healthz at https://192.168.39.216:8443/healthz ...
	I1216 03:23:47.030002   35363 api_server.go:279] https://192.168.39.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:23:47.030034   35363 api_server.go:103] status: https://192.168.39.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 03:23:47.166279   35363 api_server.go:253] Checking apiserver healthz at https://192.168.39.216:8443/healthz ...
	I1216 03:23:47.171071   35363 api_server.go:279] https://192.168.39.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:23:47.171104   35363 api_server.go:103] status: https://192.168.39.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 03:23:47.666805   35363 api_server.go:253] Checking apiserver healthz at https://192.168.39.216:8443/healthz ...
	I1216 03:23:47.672876   35363 api_server.go:279] https://192.168.39.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:23:47.672896   35363 api_server.go:103] status: https://192.168.39.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 03:23:48.166559   35363 api_server.go:253] Checking apiserver healthz at https://192.168.39.216:8443/healthz ...
	I1216 03:23:48.173695   35363 api_server.go:279] https://192.168.39.216:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:23:48.173718   35363 api_server.go:103] status: https://192.168.39.216:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 03:23:48.666367   35363 api_server.go:253] Checking apiserver healthz at https://192.168.39.216:8443/healthz ...
	I1216 03:23:48.674597   35363 api_server.go:279] https://192.168.39.216:8443/healthz returned 200:
	ok
	I1216 03:23:48.681463   35363 api_server.go:141] control plane version: v1.34.2
	I1216 03:23:48.681484   35363 api_server.go:131] duration metric: took 4.015424128s to wait for apiserver health ...
	I1216 03:23:48.681492   35363 cni.go:84] Creating CNI manager for ""
	I1216 03:23:48.681497   35363 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 03:23:48.683767   35363 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 03:23:48.684941   35363 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 03:23:48.699215   35363 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 03:23:48.734097   35363 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:23:48.744180   35363 system_pods.go:59] 7 kube-system pods found
	I1216 03:23:48.744225   35363 system_pods.go:61] "coredns-66bc5c9577-79bfz" [6218d547-f235-4b04-9843-bc5cde2cbca4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:23:48.744243   35363 system_pods.go:61] "etcd-test-preload-235435" [245fafa4-5e04-47d0-bb3a-dbfcd85ba46b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:23:48.744255   35363 system_pods.go:61] "kube-apiserver-test-preload-235435" [9862b1e4-f1e0-49fd-94f8-eb65a5509ad9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:23:48.744269   35363 system_pods.go:61] "kube-controller-manager-test-preload-235435" [e29640a5-743e-4412-b9fe-584e92265f78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 03:23:48.744285   35363 system_pods.go:61] "kube-proxy-pd48h" [1be7ea47-7cf5-4968-bbf7-6420fb88ea5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 03:23:48.744295   35363 system_pods.go:61] "kube-scheduler-test-preload-235435" [64d315e8-44b8-4abb-ba4a-b039040a536d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:23:48.744307   35363 system_pods.go:61] "storage-provisioner" [8f7d7d53-abc3-4f9b-b800-f7e88111e013] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:23:48.744320   35363 system_pods.go:74] duration metric: took 10.200103ms to wait for pod list to return data ...
	I1216 03:23:48.744332   35363 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:23:48.751448   35363 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 03:23:48.751479   35363 node_conditions.go:123] node cpu capacity is 2
	I1216 03:23:48.751496   35363 node_conditions.go:105] duration metric: took 7.157405ms to run NodePressure ...
	I1216 03:23:48.751558   35363 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:23:49.035460   35363 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1216 03:23:49.040031   35363 kubeadm.go:744] kubelet initialised
	I1216 03:23:49.040056   35363 kubeadm.go:745] duration metric: took 4.563763ms waiting for restarted kubelet to initialise ...
	I1216 03:23:49.040075   35363 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:23:49.060660   35363 ops.go:34] apiserver oom_adj: -16
	I1216 03:23:49.060687   35363 kubeadm.go:602] duration metric: took 8.126487884s to restartPrimaryControlPlane
	I1216 03:23:49.060700   35363 kubeadm.go:403] duration metric: took 8.170332262s to StartCluster
	I1216 03:23:49.060720   35363 settings.go:142] acquiring lock: {Name:mk546ecdfe1860ae68a814905b53e6453298b4fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:23:49.060816   35363 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5036/kubeconfig
	I1216 03:23:49.061645   35363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/kubeconfig: {Name:mk6832d71ef0ad581fa898dceefc2fcc2fd665b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:23:49.061981   35363 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.216 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:23:49.062134   35363 config.go:182] Loaded profile config "test-preload-235435": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:23:49.062088   35363 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:23:49.062180   35363 addons.go:70] Setting storage-provisioner=true in profile "test-preload-235435"
	I1216 03:23:49.062201   35363 addons.go:239] Setting addon storage-provisioner=true in "test-preload-235435"
	W1216 03:23:49.062209   35363 addons.go:248] addon storage-provisioner should already be in state true
	I1216 03:23:49.062210   35363 addons.go:70] Setting default-storageclass=true in profile "test-preload-235435"
	I1216 03:23:49.062237   35363 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-235435"
	I1216 03:23:49.062240   35363 host.go:66] Checking if "test-preload-235435" exists ...
	I1216 03:23:49.064226   35363 out.go:179] * Verifying Kubernetes components...
	I1216 03:23:49.064562   35363 kapi.go:59] client config for test-preload-235435: &rest.Config{Host:"https://192.168.39.216:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-5036/.minikube/profiles/test-preload-235435/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-5036/.minikube/profiles/test-preload-235435/client.key", CAFile:"/home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 03:23:49.064944   35363 addons.go:239] Setting addon default-storageclass=true in "test-preload-235435"
	W1216 03:23:49.064967   35363 addons.go:248] addon default-storageclass should already be in state true
	I1216 03:23:49.064989   35363 host.go:66] Checking if "test-preload-235435" exists ...
	I1216 03:23:49.065770   35363 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:23:49.065786   35363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:23:49.066768   35363 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:23:49.066782   35363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:23:49.066855   35363 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:23:49.066876   35363 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:23:49.069629   35363 main.go:143] libmachine: domain test-preload-235435 has defined MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:49.070052   35363 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:d6:79", ip: ""} in network mk-test-preload-235435: {Iface:virbr1 ExpiryTime:2025-12-16 04:23:31 +0000 UTC Type:0 Mac:52:54:00:65:d6:79 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:test-preload-235435 Clientid:01:52:54:00:65:d6:79}
	I1216 03:23:49.070079   35363 main.go:143] libmachine: domain test-preload-235435 has defined IP address 192.168.39.216 and MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:49.070238   35363 main.go:143] libmachine: domain test-preload-235435 has defined MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:49.070240   35363 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/test-preload-235435/id_rsa Username:docker}
	I1216 03:23:49.070747   35363 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:65:d6:79", ip: ""} in network mk-test-preload-235435: {Iface:virbr1 ExpiryTime:2025-12-16 04:23:31 +0000 UTC Type:0 Mac:52:54:00:65:d6:79 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:test-preload-235435 Clientid:01:52:54:00:65:d6:79}
	I1216 03:23:49.070783   35363 main.go:143] libmachine: domain test-preload-235435 has defined IP address 192.168.39.216 and MAC address 52:54:00:65:d6:79 in network mk-test-preload-235435
	I1216 03:23:49.071005   35363 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/test-preload-235435/id_rsa Username:docker}
	I1216 03:23:49.274151   35363 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:23:49.294291   35363 node_ready.go:35] waiting up to 6m0s for node "test-preload-235435" to be "Ready" ...
	I1216 03:23:49.337207   35363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:23:49.385724   35363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:23:50.037314   35363 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 03:23:50.038424   35363 addons.go:530] duration metric: took 976.342513ms for enable addons: enabled=[storage-provisioner default-storageclass]
	W1216 03:23:51.297281   35363 node_ready.go:57] node "test-preload-235435" has "Ready":"False" status (will retry)
	W1216 03:23:53.297430   35363 node_ready.go:57] node "test-preload-235435" has "Ready":"False" status (will retry)
	W1216 03:23:55.298504   35363 node_ready.go:57] node "test-preload-235435" has "Ready":"False" status (will retry)
	I1216 03:23:57.798268   35363 node_ready.go:49] node "test-preload-235435" is "Ready"
	I1216 03:23:57.798299   35363 node_ready.go:38] duration metric: took 8.503972228s for node "test-preload-235435" to be "Ready" ...
	I1216 03:23:57.798312   35363 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:23:57.798368   35363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:23:57.818651   35363 api_server.go:72] duration metric: took 8.756629497s to wait for apiserver process to appear ...
	I1216 03:23:57.818673   35363 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:23:57.818688   35363 api_server.go:253] Checking apiserver healthz at https://192.168.39.216:8443/healthz ...
	I1216 03:23:57.823959   35363 api_server.go:279] https://192.168.39.216:8443/healthz returned 200:
	ok
	I1216 03:23:57.824816   35363 api_server.go:141] control plane version: v1.34.2
	I1216 03:23:57.824835   35363 api_server.go:131] duration metric: took 6.156717ms to wait for apiserver health ...
	I1216 03:23:57.824842   35363 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:23:57.827988   35363 system_pods.go:59] 7 kube-system pods found
	I1216 03:23:57.828009   35363 system_pods.go:61] "coredns-66bc5c9577-79bfz" [6218d547-f235-4b04-9843-bc5cde2cbca4] Running
	I1216 03:23:57.828017   35363 system_pods.go:61] "etcd-test-preload-235435" [245fafa4-5e04-47d0-bb3a-dbfcd85ba46b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:23:57.828023   35363 system_pods.go:61] "kube-apiserver-test-preload-235435" [9862b1e4-f1e0-49fd-94f8-eb65a5509ad9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:23:57.828032   35363 system_pods.go:61] "kube-controller-manager-test-preload-235435" [e29640a5-743e-4412-b9fe-584e92265f78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 03:23:57.828036   35363 system_pods.go:61] "kube-proxy-pd48h" [1be7ea47-7cf5-4968-bbf7-6420fb88ea5f] Running
	I1216 03:23:57.828042   35363 system_pods.go:61] "kube-scheduler-test-preload-235435" [64d315e8-44b8-4abb-ba4a-b039040a536d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:23:57.828046   35363 system_pods.go:61] "storage-provisioner" [8f7d7d53-abc3-4f9b-b800-f7e88111e013] Running
	I1216 03:23:57.828051   35363 system_pods.go:74] duration metric: took 3.205262ms to wait for pod list to return data ...
	I1216 03:23:57.828060   35363 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:23:57.830533   35363 default_sa.go:45] found service account: "default"
	I1216 03:23:57.830549   35363 default_sa.go:55] duration metric: took 2.483764ms for default service account to be created ...
	I1216 03:23:57.830555   35363 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 03:23:57.832800   35363 system_pods.go:86] 7 kube-system pods found
	I1216 03:23:57.832819   35363 system_pods.go:89] "coredns-66bc5c9577-79bfz" [6218d547-f235-4b04-9843-bc5cde2cbca4] Running
	I1216 03:23:57.832826   35363 system_pods.go:89] "etcd-test-preload-235435" [245fafa4-5e04-47d0-bb3a-dbfcd85ba46b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:23:57.832845   35363 system_pods.go:89] "kube-apiserver-test-preload-235435" [9862b1e4-f1e0-49fd-94f8-eb65a5509ad9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:23:57.832852   35363 system_pods.go:89] "kube-controller-manager-test-preload-235435" [e29640a5-743e-4412-b9fe-584e92265f78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 03:23:57.832857   35363 system_pods.go:89] "kube-proxy-pd48h" [1be7ea47-7cf5-4968-bbf7-6420fb88ea5f] Running
	I1216 03:23:57.832867   35363 system_pods.go:89] "kube-scheduler-test-preload-235435" [64d315e8-44b8-4abb-ba4a-b039040a536d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:23:57.832873   35363 system_pods.go:89] "storage-provisioner" [8f7d7d53-abc3-4f9b-b800-f7e88111e013] Running
	I1216 03:23:57.832885   35363 system_pods.go:126] duration metric: took 2.324015ms to wait for k8s-apps to be running ...
	I1216 03:23:57.832892   35363 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 03:23:57.832948   35363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:23:57.848015   35363 system_svc.go:56] duration metric: took 15.121542ms WaitForService to wait for kubelet
	I1216 03:23:57.848031   35363 kubeadm.go:587] duration metric: took 8.786013276s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:23:57.848046   35363 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:23:57.849865   35363 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 03:23:57.849889   35363 node_conditions.go:123] node cpu capacity is 2
	I1216 03:23:57.849902   35363 node_conditions.go:105] duration metric: took 1.850562ms to run NodePressure ...
	I1216 03:23:57.849917   35363 start.go:242] waiting for startup goroutines ...
	I1216 03:23:57.849946   35363 start.go:247] waiting for cluster config update ...
	I1216 03:23:57.849960   35363 start.go:256] writing updated cluster config ...
	I1216 03:23:57.850222   35363 ssh_runner.go:195] Run: rm -f paused
	I1216 03:23:57.854806   35363 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:23:57.855232   35363 kapi.go:59] client config for test-preload-235435: &rest.Config{Host:"https://192.168.39.216:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-5036/.minikube/profiles/test-preload-235435/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-5036/.minikube/profiles/test-preload-235435/client.key", CAFile:"/home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 03:23:57.857235   35363 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-79bfz" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:23:57.861528   35363 pod_ready.go:94] pod "coredns-66bc5c9577-79bfz" is "Ready"
	I1216 03:23:57.861543   35363 pod_ready.go:86] duration metric: took 4.291628ms for pod "coredns-66bc5c9577-79bfz" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:23:57.863769   35363 pod_ready.go:83] waiting for pod "etcd-test-preload-235435" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 03:23:59.869937   35363 pod_ready.go:104] pod "etcd-test-preload-235435" is not "Ready", error: <nil>
	I1216 03:24:00.869868   35363 pod_ready.go:94] pod "etcd-test-preload-235435" is "Ready"
	I1216 03:24:00.869891   35363 pod_ready.go:86] duration metric: took 3.006109209s for pod "etcd-test-preload-235435" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:24:00.872257   35363 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-235435" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:24:00.876683   35363 pod_ready.go:94] pod "kube-apiserver-test-preload-235435" is "Ready"
	I1216 03:24:00.876699   35363 pod_ready.go:86] duration metric: took 4.425799ms for pod "kube-apiserver-test-preload-235435" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:24:00.878301   35363 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-235435" in "kube-system" namespace to be "Ready" or be gone ...
	W1216 03:24:02.885020   35363 pod_ready.go:104] pod "kube-controller-manager-test-preload-235435" is not "Ready", error: <nil>
	I1216 03:24:03.384169   35363 pod_ready.go:94] pod "kube-controller-manager-test-preload-235435" is "Ready"
	I1216 03:24:03.384194   35363 pod_ready.go:86] duration metric: took 2.505878592s for pod "kube-controller-manager-test-preload-235435" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:24:03.386048   35363 pod_ready.go:83] waiting for pod "kube-proxy-pd48h" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:24:03.390134   35363 pod_ready.go:94] pod "kube-proxy-pd48h" is "Ready"
	I1216 03:24:03.390150   35363 pod_ready.go:86] duration metric: took 4.086529ms for pod "kube-proxy-pd48h" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:24:03.458947   35363 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-235435" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:24:03.860026   35363 pod_ready.go:94] pod "kube-scheduler-test-preload-235435" is "Ready"
	I1216 03:24:03.860054   35363 pod_ready.go:86] duration metric: took 401.088913ms for pod "kube-scheduler-test-preload-235435" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:24:03.860066   35363 pod_ready.go:40] duration metric: took 6.005239813s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:24:03.901219   35363 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 03:24:03.903633   35363 out.go:179] * Done! kubectl is now configured to use "test-preload-235435" cluster and "default" namespace by default
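A rough manual equivalent of the pod_ready.go readiness waits logged above (a sketch, assuming the "test-preload-235435" kubectl context created by this run is still available; the test itself uses client-go rather than kubectl) would be:

	  kubectl --context test-preload-235435 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=90s
	  kubectl --context test-preload-235435 -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=90s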
	
	
	==> CRI-O <==
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.646576884Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:63d44ba93cdf8f3c0dc4920a1b2fbac0a572bcdb6df87dbee2d258b6017267b2,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-79bfz,Uid:6218d547-f235-4b04-9843-bc5cde2cbca4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765855435357255694,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-79bfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6218d547-f235-4b04-9843-bc5cde2cbca4,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-16T03:23:47.566935841Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:69211032a0dee4e5f3a2bd8fc585102548e5f16db1a7c8c717eb920568856076,Metadata:&PodSandboxMetadata{Name:kube-proxy-pd48h,Uid:1be7ea47-7cf5-4968-bbf7-6420fb88ea5f,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1765855427898373692,Labels:map[string]string{controller-revision-hash: 66d5f8d6f6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pd48h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1be7ea47-7cf5-4968-bbf7-6420fb88ea5f,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-16T03:23:47.566941578Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8a2efd69feae06df8ba8a4cf28c47e4aec267022454dd80006fc659045d8d4f0,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:8f7d7d53-abc3-4f9b-b800-f7e88111e013,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765855427879399731,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f7d7d53-abc3-4f9b-b800-f7e8
8111e013,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-12-16T03:23:47.566933981Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:60b4340dd4c9594eda1015d573538aa65fb8d567ceffd9838f2a91340b6d7241,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-235435,Uid:5981415915cce3575
48ceaeacea185c9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765855424242453874,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-235435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5981415915cce357548ceaeacea185c9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.216:2379,kubernetes.io/config.hash: 5981415915cce357548ceaeacea185c9,kubernetes.io/config.seen: 2025-12-16T03:23:42.615783802Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9f11ae0bed10165777f61a802bab1a93e621359758d1ea122a8ca01397c98141,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-235435,Uid:6710eb0f600a4fe4ff9b434d48989ea7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765855424239547991,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-pr
eload-235435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6710eb0f600a4fe4ff9b434d48989ea7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.216:8443,kubernetes.io/config.hash: 6710eb0f600a4fe4ff9b434d48989ea7,kubernetes.io/config.seen: 2025-12-16T03:23:42.573212038Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:20970b3d95bd1ae8512cc188959e3b8627ea8346db311a34cdc26828294a7303,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-235435,Uid:41952977983445e00011e0f366d1b77f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765855424232797730,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-235435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41952977983445e00011e0f366d1b77f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 41952977983445e
00011e0f366d1b77f,kubernetes.io/config.seen: 2025-12-16T03:23:42.573210977Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c89640a83da4d59f65c90f0dd2d1d6181e43a8e37306e0c62a1167fe8dbd6232,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-235435,Uid:4b0d8d9f23c9a6b7b731d73c70b7aca1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1765855424229186973,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-235435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0d8d9f23c9a6b7b731d73c70b7aca1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4b0d8d9f23c9a6b7b731d73c70b7aca1,kubernetes.io/config.seen: 2025-12-16T03:23:42.573206676Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=abf4c425-c011-4b5a-9052-ef3f8deb4d01 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.647599597Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=227f6367-bcf6-4d17-aaa2-bf7b07fe2af8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.647651088Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=227f6367-bcf6-4d17-aaa2-bf7b07fe2af8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.647840316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3615ac4f3d6584dff60de908be551cdef920e99fd6e076b3a980a57712e1d10b,PodSandboxId:63d44ba93cdf8f3c0dc4920a1b2fbac0a572bcdb6df87dbee2d258b6017267b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765855435568351239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-79bfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6218d547-f235-4b04-9843-bc5cde2cbca4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14c815315f158f31fb00c8a13b1a8acd59212cac9147e07e51f1fe0988e9f658,PodSandboxId:69211032a0dee4e5f3a2bd8fc585102548e5f16db1a7c8c717eb920568856076,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765855428134336059,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pd48h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1be7ea47-7cf5-4968-bbf7-6420fb88ea5f,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0989e85a7e7da5dda8f2e806f4825d6171fc9f42bc4cdf09c06b5e064bc80b19,PodSandboxId:8a2efd69feae06df8ba8a4cf28c47e4aec267022454dd80006fc659045d8d4f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765855428051310545,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f7d7d53-abc3-4f9b-b800-f7e88111e013,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:779378df841a13fc83d60fbd206f1e55f1864b0a61bec96c86b981288eb56307,PodSandboxId:20970b3d95bd1ae8512cc188959e3b8627ea8346db311a34cdc26828294a7303,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765855424482454802,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-235435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41952977983445e00011e0f366d1b77f,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c851b7cfe3c26916fe9c9a3156e0aad65090acc84c140e670982ebfb63934e42,PodSandboxId:60b4340dd4c9594eda1015d573538aa65fb8d567ceffd9838f2a91340b6d7241,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:
CONTAINER_RUNNING,CreatedAt:1765855424466693937,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-235435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5981415915cce357548ceaeacea185c9,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31add11259c1f68e5505bc291a1f9dd056f79c07380f1c56d18f42003a533fa7,PodSandboxId:c89640a83da4d59f65c90f0dd2d1d6181e43a8e37306e0c62a1167fe8dbd6232,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765855424457176797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-235435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0d8d9f23c9a6b7b731d73c70b7aca1,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b92bfe56416a431ca973a8489ecaf128afaf1e05ce072e086f020bba30b377a,PodSandboxId:9f11ae0bed10165777f61a802bab1a93e621359758d1ea122a8ca01397c98141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765855424406704307,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-235435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6710eb0f600a4fe4ff9b434d48989ea7,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=227f6367-bcf6-4d17-aaa2-bf7b07fe2af8 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.673925714Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4ffe36dd-5907-4cf9-9fc9-908889504e48 name=/runtime.v1.RuntimeService/Version
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.674131833Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4ffe36dd-5907-4cf9-9fc9-908889504e48 name=/runtime.v1.RuntimeService/Version
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.675395195Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45ffc32c-d85a-49ec-bda5-ff6a8a4faf30 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.675946792Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765855444675926424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45ffc32c-d85a-49ec-bda5-ff6a8a4faf30 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.677012714Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1761714e-1098-44ee-9a6f-c9deeb7df6b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.677073612Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1761714e-1098-44ee-9a6f-c9deeb7df6b0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.677234726Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3615ac4f3d6584dff60de908be551cdef920e99fd6e076b3a980a57712e1d10b,PodSandboxId:63d44ba93cdf8f3c0dc4920a1b2fbac0a572bcdb6df87dbee2d258b6017267b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765855435568351239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-79bfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6218d547-f235-4b04-9843-bc5cde2cbca4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14c815315f158f31fb00c8a13b1a8acd59212cac9147e07e51f1fe0988e9f658,PodSandboxId:69211032a0dee4e5f3a2bd8fc585102548e5f16db1a7c8c717eb920568856076,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765855428134336059,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pd48h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1be7ea47-7cf5-4968-bbf7-6420fb88ea5f,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0989e85a7e7da5dda8f2e806f4825d6171fc9f42bc4cdf09c06b5e064bc80b19,PodSandboxId:8a2efd69feae06df8ba8a4cf28c47e4aec267022454dd80006fc659045d8d4f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765855428051310545,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f7d7d53-abc3-4f9b-b800-f7e88111e013,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:779378df841a13fc83d60fbd206f1e55f1864b0a61bec96c86b981288eb56307,PodSandboxId:20970b3d95bd1ae8512cc188959e3b8627ea8346db311a34cdc26828294a7303,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765855424482454802,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-235435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41952977983445e00011e0f366d1b77f,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c851b7cfe3c26916fe9c9a3156e0aad65090acc84c140e670982ebfb63934e42,PodSandboxId:60b4340dd4c9594eda1015d573538aa65fb8d567ceffd9838f2a91340b6d7241,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:
CONTAINER_RUNNING,CreatedAt:1765855424466693937,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-235435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5981415915cce357548ceaeacea185c9,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31add11259c1f68e5505bc291a1f9dd056f79c07380f1c56d18f42003a533fa7,PodSandboxId:c89640a83da4d59f65c90f0dd2d1d6181e43a8e37306e0c62a1167fe8dbd6232,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765855424457176797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-235435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0d8d9f23c9a6b7b731d73c70b7aca1,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b92bfe56416a431ca973a8489ecaf128afaf1e05ce072e086f020bba30b377a,PodSandboxId:9f11ae0bed10165777f61a802bab1a93e621359758d1ea122a8ca01397c98141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765855424406704307,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-235435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6710eb0f600a4fe4ff9b434d48989ea7,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1761714e-1098-44ee-9a6f-c9deeb7df6b0 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.709036793Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd46bdc0-bf79-4145-9a4c-3a97f2e99d11 name=/runtime.v1.RuntimeService/Version
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.709108464Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd46bdc0-bf79-4145-9a4c-3a97f2e99d11 name=/runtime.v1.RuntimeService/Version
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.710107240Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8ced518d-7d7b-4cb9-a3ac-be1ad1c39d78 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.710466209Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765855444710444217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ced518d-7d7b-4cb9-a3ac-be1ad1c39d78 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.711394760Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7a81796-d360-4255-9bbd-b6aa8a06aed0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.711780313Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7a81796-d360-4255-9bbd-b6aa8a06aed0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.712144684Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3615ac4f3d6584dff60de908be551cdef920e99fd6e076b3a980a57712e1d10b,PodSandboxId:63d44ba93cdf8f3c0dc4920a1b2fbac0a572bcdb6df87dbee2d258b6017267b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765855435568351239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-79bfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6218d547-f235-4b04-9843-bc5cde2cbca4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14c815315f158f31fb00c8a13b1a8acd59212cac9147e07e51f1fe0988e9f658,PodSandboxId:69211032a0dee4e5f3a2bd8fc585102548e5f16db1a7c8c717eb920568856076,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765855428134336059,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pd48h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1be7ea47-7cf5-4968-bbf7-6420fb88ea5f,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0989e85a7e7da5dda8f2e806f4825d6171fc9f42bc4cdf09c06b5e064bc80b19,PodSandboxId:8a2efd69feae06df8ba8a4cf28c47e4aec267022454dd80006fc659045d8d4f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765855428051310545,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f7d7d53-abc3-4f9b-b800-f7e88111e013,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:779378df841a13fc83d60fbd206f1e55f1864b0a61bec96c86b981288eb56307,PodSandboxId:20970b3d95bd1ae8512cc188959e3b8627ea8346db311a34cdc26828294a7303,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765855424482454802,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-235435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41952977983445e00011e0f366d1b77f,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c851b7cfe3c26916fe9c9a3156e0aad65090acc84c140e670982ebfb63934e42,PodSandboxId:60b4340dd4c9594eda1015d573538aa65fb8d567ceffd9838f2a91340b6d7241,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:
CONTAINER_RUNNING,CreatedAt:1765855424466693937,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-235435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5981415915cce357548ceaeacea185c9,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31add11259c1f68e5505bc291a1f9dd056f79c07380f1c56d18f42003a533fa7,PodSandboxId:c89640a83da4d59f65c90f0dd2d1d6181e43a8e37306e0c62a1167fe8dbd6232,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765855424457176797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-235435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0d8d9f23c9a6b7b731d73c70b7aca1,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b92bfe56416a431ca973a8489ecaf128afaf1e05ce072e086f020bba30b377a,PodSandboxId:9f11ae0bed10165777f61a802bab1a93e621359758d1ea122a8ca01397c98141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765855424406704307,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-235435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6710eb0f600a4fe4ff9b434d48989ea7,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7a81796-d360-4255-9bbd-b6aa8a06aed0 name=/runtime.v1.RuntimeServic
e/ListContainers
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.738270002Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cc5565e9-8050-4bcc-b6c4-ba83c1cc3c59 name=/runtime.v1.RuntimeService/Version
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.738338846Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cc5565e9-8050-4bcc-b6c4-ba83c1cc3c59 name=/runtime.v1.RuntimeService/Version
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.739856034Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9756293a-0485-4181-8eee-c8524fd29b8c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.740271537Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765855444740248565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:132143,},InodesUsed:&UInt64Value{Value:55,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9756293a-0485-4181-8eee-c8524fd29b8c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.741372338Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fcad293b-5131-4b18-bb97-8e4376fd5342 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.741478221Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fcad293b-5131-4b18-bb97-8e4376fd5342 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:24:04 test-preload-235435 crio[833]: time="2025-12-16 03:24:04.741776736Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3615ac4f3d6584dff60de908be551cdef920e99fd6e076b3a980a57712e1d10b,PodSandboxId:63d44ba93cdf8f3c0dc4920a1b2fbac0a572bcdb6df87dbee2d258b6017267b2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765855435568351239,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-79bfz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6218d547-f235-4b04-9843-bc5cde2cbca4,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14c815315f158f31fb00c8a13b1a8acd59212cac9147e07e51f1fe0988e9f658,PodSandboxId:69211032a0dee4e5f3a2bd8fc585102548e5f16db1a7c8c717eb920568856076,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765855428134336059,Labels:map[st
ring]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pd48h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1be7ea47-7cf5-4968-bbf7-6420fb88ea5f,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0989e85a7e7da5dda8f2e806f4825d6171fc9f42bc4cdf09c06b5e064bc80b19,PodSandboxId:8a2efd69feae06df8ba8a4cf28c47e4aec267022454dd80006fc659045d8d4f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1765855428051310545,Labels:map[string]string{io.kubernete
s.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f7d7d53-abc3-4f9b-b800-f7e88111e013,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:779378df841a13fc83d60fbd206f1e55f1864b0a61bec96c86b981288eb56307,PodSandboxId:20970b3d95bd1ae8512cc188959e3b8627ea8346db311a34cdc26828294a7303,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765855424482454802,Labels:map[string]string{io.kubernetes.container.name:
kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-235435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 41952977983445e00011e0f366d1b77f,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c851b7cfe3c26916fe9c9a3156e0aad65090acc84c140e670982ebfb63934e42,PodSandboxId:60b4340dd4c9594eda1015d573538aa65fb8d567ceffd9838f2a91340b6d7241,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:
CONTAINER_RUNNING,CreatedAt:1765855424466693937,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-235435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5981415915cce357548ceaeacea185c9,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31add11259c1f68e5505bc291a1f9dd056f79c07380f1c56d18f42003a533fa7,PodSandboxId:c89640a83da4d59f65c90f0dd2d1d6181e43a8e37306e0c62a1167fe8dbd6232,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765855424457176797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-235435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0d8d9f23c9a6b7b731d73c70b7aca1,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b92bfe56416a431ca973a8489ecaf128afaf1e05ce072e086f020bba30b377a,PodSandboxId:9f11ae0bed10165777f61a802bab1a93e621359758d1ea122a8ca01397c98141,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&Im
ageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765855424406704307,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-235435,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6710eb0f600a4fe4ff9b434d48989ea7,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fcad293b-5131-4b18-bb97-8e4376fd5342 name=/runtime.v1.RuntimeServic
e/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                           NAMESPACE
	3615ac4f3d658       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   9 seconds ago       Running             coredns                   1                   63d44ba93cdf8       coredns-66bc5c9577-79bfz                      kube-system
	14c815315f158       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   16 seconds ago      Running             kube-proxy                1                   69211032a0dee       kube-proxy-pd48h                              kube-system
	0989e85a7e7da       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   8a2efd69feae0       storage-provisioner                           kube-system
	779378df841a1       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   20 seconds ago      Running             kube-scheduler            1                   20970b3d95bd1       kube-scheduler-test-preload-235435            kube-system
	c851b7cfe3c26       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   20 seconds ago      Running             etcd                      1                   60b4340dd4c95       etcd-test-preload-235435                      kube-system
	31add11259c1f       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   20 seconds ago      Running             kube-controller-manager   1                   c89640a83da4d       kube-controller-manager-test-preload-235435   kube-system
	4b92bfe56416a       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   20 seconds ago      Running             kube-apiserver            1                   9f11ae0bed101       kube-apiserver-test-preload-235435            kube-system
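The "container status" table above is the node-local view from the CRI runtime; a similar listing can be reproduced by hand (a sketch, assuming the profile is still running and that crictl is present in the guest image, as it is in standard minikube ISOs):

	  out/minikube-linux-amd64 -p test-preload-235435 ssh "sudo crictl ps -a"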
	
	
	==> coredns [3615ac4f3d6584dff60de908be551cdef920e99fd6e076b3a980a57712e1d10b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54425 - 27072 "HINFO IN 2345291614257574019.5879406071624031175. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031675247s
	
	
	==> describe nodes <==
	Name:               test-preload-235435
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-235435
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=test-preload-235435
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T03_22_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 03:22:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-235435
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 03:23:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 03:23:57 +0000   Tue, 16 Dec 2025 03:22:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 03:23:57 +0000   Tue, 16 Dec 2025 03:22:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 03:23:57 +0000   Tue, 16 Dec 2025 03:22:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 03:23:57 +0000   Tue, 16 Dec 2025 03:23:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.216
	  Hostname:    test-preload-235435
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 c10c842f73ae43f1bd336123e9382e02
	  System UUID:                c10c842f-73ae-43f1-bd33-6123e9382e02
	  Boot ID:                    cb57b420-c2c2-49bd-99c3-adf8e18d2e26
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-79bfz                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     92s
	  kube-system                 etcd-test-preload-235435                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         97s
	  kube-system                 kube-apiserver-test-preload-235435             250m (12%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-test-preload-235435    200m (10%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-proxy-pd48h                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-test-preload-235435             100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 91s                  kube-proxy       
	  Normal   Starting                 16s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  104s (x8 over 104s)  kubelet          Node test-preload-235435 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    104s (x8 over 104s)  kubelet          Node test-preload-235435 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     104s (x7 over 104s)  kubelet          Node test-preload-235435 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  104s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 98s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     97s                  kubelet          Node test-preload-235435 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  97s                  kubelet          Node test-preload-235435 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    97s                  kubelet          Node test-preload-235435 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  97s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                96s                  kubelet          Node test-preload-235435 status is now: NodeReady
	  Normal   RegisteredNode           93s                  node-controller  Node test-preload-235435 event: Registered Node test-preload-235435 in Controller
	  Normal   Starting                 22s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node test-preload-235435 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node test-preload-235435 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node test-preload-235435 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 17s                  kubelet          Node test-preload-235435 has been rebooted, boot id: cb57b420-c2c2-49bd-99c3-adf8e18d2e26
	  Normal   RegisteredNode           14s                  node-controller  Node test-preload-235435 event: Registered Node test-preload-235435 in Controller
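The node summary above is the control-plane node as the API server reports it; it can be refreshed directly (assuming the cluster is still up) with:

	  kubectl --context test-preload-235435 describe node test-preload-235435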
	
	
	==> dmesg <==
	[Dec16 03:23] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001643] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.003668] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.006287] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.109415] kauditd_printk_skb: 60 callbacks suppressed
	[  +0.099188] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.492038] kauditd_printk_skb: 168 callbacks suppressed
	[  +0.000025] kauditd_printk_skb: 128 callbacks suppressed
	[Dec16 03:24] kauditd_printk_skb: 65 callbacks suppressed
	
	
	==> etcd [c851b7cfe3c26916fe9c9a3156e0aad65090acc84c140e670982ebfb63934e42] <==
	{"level":"warn","ts":"2025-12-16T03:23:46.029634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.045119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.052558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.065042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.073166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.081474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.090791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.104509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.114582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.122221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.134208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.143030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.156508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.167403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.180009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.187057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.198054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.225052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.233249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.240823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.250319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.261479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.268697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.277002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:23:46.324012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51576","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:24:05 up 0 min,  0 users,  load average: 0.80, 0.22, 0.07
	Linux test-preload-235435 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 00:48:01 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4b92bfe56416a431ca973a8489ecaf128afaf1e05ce072e086f020bba30b377a] <==
	I1216 03:23:47.008321       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1216 03:23:47.008561       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1216 03:23:47.008607       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1216 03:23:47.008624       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1216 03:23:47.008702       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1216 03:23:47.008770       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 03:23:47.011368       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1216 03:23:47.011485       1 aggregator.go:171] initial CRD sync complete...
	I1216 03:23:47.011508       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 03:23:47.011514       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 03:23:47.011518       1 cache.go:39] Caches are synced for autoregister controller
	I1216 03:23:47.021419       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 03:23:47.024350       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1216 03:23:47.024393       1 policy_source.go:240] refreshing policies
	E1216 03:23:47.030587       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1216 03:23:47.080434       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 03:23:47.683429       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 03:23:47.895223       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 03:23:48.864458       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 03:23:48.895477       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1216 03:23:48.921536       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 03:23:48.927945       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 03:23:50.543061       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 03:23:50.792724       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 03:23:50.845050       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [31add11259c1f68e5505bc291a1f9dd056f79c07380f1c56d18f42003a533fa7] <==
	I1216 03:23:50.350734       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 03:23:50.350739       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 03:23:50.350744       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 03:23:50.350791       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1216 03:23:50.353256       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1216 03:23:50.357629       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1216 03:23:50.362236       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1216 03:23:50.369832       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 03:23:50.379040       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1216 03:23:50.381037       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1216 03:23:50.381047       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1216 03:23:50.384696       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1216 03:23:50.390193       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 03:23:50.390248       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 03:23:50.390269       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1216 03:23:50.390387       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1216 03:23:50.390703       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 03:23:50.390917       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1216 03:23:50.390834       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1216 03:23:50.390852       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1216 03:23:50.390859       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1216 03:23:50.390845       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1216 03:23:50.398620       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1216 03:23:50.409389       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 03:24:00.306252       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [14c815315f158f31fb00c8a13b1a8acd59212cac9147e07e51f1fe0988e9f658] <==
	I1216 03:23:48.380023       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 03:23:48.480284       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 03:23:48.480321       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.216"]
	E1216 03:23:48.480493       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 03:23:48.512877       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1216 03:23:48.512921       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 03:23:48.512945       1 server_linux.go:132] "Using iptables Proxier"
	I1216 03:23:48.520884       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 03:23:48.521269       1 server.go:527] "Version info" version="v1.34.2"
	I1216 03:23:48.521299       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:23:48.525941       1 config.go:309] "Starting node config controller"
	I1216 03:23:48.526004       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 03:23:48.526011       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 03:23:48.526141       1 config.go:200] "Starting service config controller"
	I1216 03:23:48.526149       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 03:23:48.526162       1 config.go:106] "Starting endpoint slice config controller"
	I1216 03:23:48.526166       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 03:23:48.526176       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 03:23:48.526180       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 03:23:48.627277       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 03:23:48.627307       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 03:23:48.627305       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [779378df841a13fc83d60fbd206f1e55f1864b0a61bec96c86b981288eb56307] <==
	I1216 03:23:45.762458       1 serving.go:386] Generated self-signed cert in-memory
	W1216 03:23:46.956141       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 03:23:46.956262       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 03:23:46.956279       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 03:23:46.956286       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 03:23:47.028646       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 03:23:47.028682       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:23:47.032248       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 03:23:47.032326       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 03:23:47.032383       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:23:47.032434       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:23:47.133590       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 16 03:23:47 test-preload-235435 kubelet[1181]: I1216 03:23:47.563903    1181 apiserver.go:52] "Watching apiserver"
	Dec 16 03:23:47 test-preload-235435 kubelet[1181]: E1216 03:23:47.568578    1181 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-79bfz" podUID="6218d547-f235-4b04-9843-bc5cde2cbca4"
	Dec 16 03:23:47 test-preload-235435 kubelet[1181]: I1216 03:23:47.599703    1181 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 16 03:23:47 test-preload-235435 kubelet[1181]: E1216 03:23:47.637889    1181 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Dec 16 03:23:47 test-preload-235435 kubelet[1181]: I1216 03:23:47.679861    1181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8f7d7d53-abc3-4f9b-b800-f7e88111e013-tmp\") pod \"storage-provisioner\" (UID: \"8f7d7d53-abc3-4f9b-b800-f7e88111e013\") " pod="kube-system/storage-provisioner"
	Dec 16 03:23:47 test-preload-235435 kubelet[1181]: I1216 03:23:47.679899    1181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1be7ea47-7cf5-4968-bbf7-6420fb88ea5f-xtables-lock\") pod \"kube-proxy-pd48h\" (UID: \"1be7ea47-7cf5-4968-bbf7-6420fb88ea5f\") " pod="kube-system/kube-proxy-pd48h"
	Dec 16 03:23:47 test-preload-235435 kubelet[1181]: I1216 03:23:47.679935    1181 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1be7ea47-7cf5-4968-bbf7-6420fb88ea5f-lib-modules\") pod \"kube-proxy-pd48h\" (UID: \"1be7ea47-7cf5-4968-bbf7-6420fb88ea5f\") " pod="kube-system/kube-proxy-pd48h"
	Dec 16 03:23:47 test-preload-235435 kubelet[1181]: E1216 03:23:47.680301    1181 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 16 03:23:47 test-preload-235435 kubelet[1181]: E1216 03:23:47.680415    1181 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6218d547-f235-4b04-9843-bc5cde2cbca4-config-volume podName:6218d547-f235-4b04-9843-bc5cde2cbca4 nodeName:}" failed. No retries permitted until 2025-12-16 03:23:48.180396374 +0000 UTC m=+5.708185420 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6218d547-f235-4b04-9843-bc5cde2cbca4-config-volume") pod "coredns-66bc5c9577-79bfz" (UID: "6218d547-f235-4b04-9843-bc5cde2cbca4") : object "kube-system"/"coredns" not registered
	Dec 16 03:23:47 test-preload-235435 kubelet[1181]: I1216 03:23:47.715565    1181 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-235435"
	Dec 16 03:23:47 test-preload-235435 kubelet[1181]: I1216 03:23:47.715631    1181 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-235435"
	Dec 16 03:23:47 test-preload-235435 kubelet[1181]: E1216 03:23:47.732431    1181 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-235435\" already exists" pod="kube-system/kube-scheduler-test-preload-235435"
	Dec 16 03:23:47 test-preload-235435 kubelet[1181]: E1216 03:23:47.734526    1181 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-test-preload-235435\" already exists" pod="kube-system/etcd-test-preload-235435"
	Dec 16 03:23:48 test-preload-235435 kubelet[1181]: E1216 03:23:48.184598    1181 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 16 03:23:48 test-preload-235435 kubelet[1181]: E1216 03:23:48.186322    1181 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6218d547-f235-4b04-9843-bc5cde2cbca4-config-volume podName:6218d547-f235-4b04-9843-bc5cde2cbca4 nodeName:}" failed. No retries permitted until 2025-12-16 03:23:49.186299771 +0000 UTC m=+6.714088814 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6218d547-f235-4b04-9843-bc5cde2cbca4-config-volume") pod "coredns-66bc5c9577-79bfz" (UID: "6218d547-f235-4b04-9843-bc5cde2cbca4") : object "kube-system"/"coredns" not registered
	Dec 16 03:23:48 test-preload-235435 kubelet[1181]: E1216 03:23:48.646931    1181 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-79bfz" podUID="6218d547-f235-4b04-9843-bc5cde2cbca4"
	Dec 16 03:23:49 test-preload-235435 kubelet[1181]: E1216 03:23:49.188822    1181 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 16 03:23:49 test-preload-235435 kubelet[1181]: E1216 03:23:49.188905    1181 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6218d547-f235-4b04-9843-bc5cde2cbca4-config-volume podName:6218d547-f235-4b04-9843-bc5cde2cbca4 nodeName:}" failed. No retries permitted until 2025-12-16 03:23:51.188888366 +0000 UTC m=+8.716677411 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6218d547-f235-4b04-9843-bc5cde2cbca4-config-volume") pod "coredns-66bc5c9577-79bfz" (UID: "6218d547-f235-4b04-9843-bc5cde2cbca4") : object "kube-system"/"coredns" not registered
	Dec 16 03:23:50 test-preload-235435 kubelet[1181]: E1216 03:23:50.645762    1181 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-66bc5c9577-79bfz" podUID="6218d547-f235-4b04-9843-bc5cde2cbca4"
	Dec 16 03:23:51 test-preload-235435 kubelet[1181]: E1216 03:23:51.202803    1181 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 16 03:23:51 test-preload-235435 kubelet[1181]: E1216 03:23:51.202944    1181 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6218d547-f235-4b04-9843-bc5cde2cbca4-config-volume podName:6218d547-f235-4b04-9843-bc5cde2cbca4 nodeName:}" failed. No retries permitted until 2025-12-16 03:23:55.20292636 +0000 UTC m=+12.730715403 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6218d547-f235-4b04-9843-bc5cde2cbca4-config-volume") pod "coredns-66bc5c9577-79bfz" (UID: "6218d547-f235-4b04-9843-bc5cde2cbca4") : object "kube-system"/"coredns" not registered
	Dec 16 03:23:52 test-preload-235435 kubelet[1181]: E1216 03:23:52.636290    1181 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765855432635943737 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 16 03:23:52 test-preload-235435 kubelet[1181]: E1216 03:23:52.636311    1181 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765855432635943737 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 16 03:24:02 test-preload-235435 kubelet[1181]: E1216 03:24:02.638070    1181 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765855442637749311 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	Dec 16 03:24:02 test-preload-235435 kubelet[1181]: E1216 03:24:02.638094    1181 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765855442637749311 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:132143} inodes_used:{value:55}}"
	
	
	==> storage-provisioner [0989e85a7e7da5dda8f2e806f4825d6171fc9f42bc4cdf09c06b5e064bc80b19] <==
	I1216 03:23:48.234592       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-235435 -n test-preload-235435
helpers_test.go:270: (dbg) Run:  kubectl --context test-preload-235435 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:176: Cleaning up "test-preload-235435" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-235435
--- FAIL: TestPreload (144.68s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (59.23s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-127368 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-127368 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.408470409s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-127368] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-127368" primary control-plane node in "pause-127368" cluster
	* Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-127368" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:31:40.371445   43267 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:31:40.371563   43267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:31:40.371575   43267 out.go:374] Setting ErrFile to fd 2...
	I1216 03:31:40.371579   43267 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:31:40.371798   43267 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
	I1216 03:31:40.372240   43267 out.go:368] Setting JSON to false
	I1216 03:31:40.373180   43267 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4445,"bootTime":1765851455,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:31:40.373255   43267 start.go:143] virtualization: kvm guest
	I1216 03:31:40.375277   43267 out.go:179] * [pause-127368] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:31:40.376666   43267 notify.go:221] Checking for updates...
	I1216 03:31:40.376676   43267 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:31:40.378105   43267 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:31:40.379539   43267 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	I1216 03:31:40.380807   43267 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	I1216 03:31:40.382045   43267 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:31:40.383428   43267 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:31:40.385084   43267 config.go:182] Loaded profile config "pause-127368": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:31:40.385588   43267 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:31:40.420410   43267 out.go:179] * Using the kvm2 driver based on existing profile
	I1216 03:31:40.421530   43267 start.go:309] selected driver: kvm2
	I1216 03:31:40.421545   43267 start.go:927] validating driver "kvm2" against &{Name:pause-127368 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-127368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.23 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:31:40.421656   43267 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:31:40.422549   43267 cni.go:84] Creating CNI manager for ""
	I1216 03:31:40.422605   43267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 03:31:40.422644   43267 start.go:353] cluster config:
	{Name:pause-127368 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-127368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.23 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:31:40.422745   43267 iso.go:125] acquiring lock: {Name:mk055aa36b1051bc664b283a8a6fb2af4db94c44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:31:40.424085   43267 out.go:179] * Starting "pause-127368" primary control-plane node in "pause-127368" cluster
	I1216 03:31:40.425144   43267 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:31:40.425170   43267 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 03:31:40.425176   43267 cache.go:65] Caching tarball of preloaded images
	I1216 03:31:40.425261   43267 preload.go:238] Found /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 03:31:40.425271   43267 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 03:31:40.425366   43267 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/pause-127368/config.json ...
	I1216 03:31:40.425548   43267 start.go:360] acquireMachinesLock for pause-127368: {Name:mk6501572e7fc03699ef9d932e34f995d8ad6f98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:31:44.013264   43267 start.go:364] duration metric: took 3.587661083s to acquireMachinesLock for "pause-127368"
	I1216 03:31:44.013317   43267 start.go:96] Skipping create...Using existing machine configuration
	I1216 03:31:44.013325   43267 fix.go:54] fixHost starting: 
	I1216 03:31:44.015884   43267 fix.go:112] recreateIfNeeded on pause-127368: state=Running err=<nil>
	W1216 03:31:44.015911   43267 fix.go:138] unexpected machine state, will restart: <nil>
	I1216 03:31:44.017987   43267 out.go:252] * Updating the running kvm2 "pause-127368" VM ...
	I1216 03:31:44.018039   43267 machine.go:94] provisionDockerMachine start ...
	I1216 03:31:44.022175   43267 main.go:143] libmachine: domain pause-127368 has defined MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:44.022704   43267 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e2:d9:06", ip: ""} in network mk-pause-127368: {Iface:virbr5 ExpiryTime:2025-12-16 04:30:40 +0000 UTC Type:0 Mac:52:54:00:e2:d9:06 Iaid: IPaddr:192.168.83.23 Prefix:24 Hostname:pause-127368 Clientid:01:52:54:00:e2:d9:06}
	I1216 03:31:44.022732   43267 main.go:143] libmachine: domain pause-127368 has defined IP address 192.168.83.23 and MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:44.023041   43267 main.go:143] libmachine: Using SSH client type: native
	I1216 03:31:44.023286   43267 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.83.23 22 <nil> <nil>}
	I1216 03:31:44.023303   43267 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 03:31:44.151115   43267 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-127368
	
	I1216 03:31:44.151144   43267 buildroot.go:166] provisioning hostname "pause-127368"
	I1216 03:31:44.154563   43267 main.go:143] libmachine: domain pause-127368 has defined MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:44.155083   43267 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e2:d9:06", ip: ""} in network mk-pause-127368: {Iface:virbr5 ExpiryTime:2025-12-16 04:30:40 +0000 UTC Type:0 Mac:52:54:00:e2:d9:06 Iaid: IPaddr:192.168.83.23 Prefix:24 Hostname:pause-127368 Clientid:01:52:54:00:e2:d9:06}
	I1216 03:31:44.155116   43267 main.go:143] libmachine: domain pause-127368 has defined IP address 192.168.83.23 and MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:44.155306   43267 main.go:143] libmachine: Using SSH client type: native
	I1216 03:31:44.155596   43267 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.83.23 22 <nil> <nil>}
	I1216 03:31:44.155615   43267 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-127368 && echo "pause-127368" | sudo tee /etc/hostname
	I1216 03:31:44.300805   43267 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-127368
	
	I1216 03:31:44.304424   43267 main.go:143] libmachine: domain pause-127368 has defined MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:44.304966   43267 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e2:d9:06", ip: ""} in network mk-pause-127368: {Iface:virbr5 ExpiryTime:2025-12-16 04:30:40 +0000 UTC Type:0 Mac:52:54:00:e2:d9:06 Iaid: IPaddr:192.168.83.23 Prefix:24 Hostname:pause-127368 Clientid:01:52:54:00:e2:d9:06}
	I1216 03:31:44.305022   43267 main.go:143] libmachine: domain pause-127368 has defined IP address 192.168.83.23 and MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:44.305258   43267 main.go:143] libmachine: Using SSH client type: native
	I1216 03:31:44.305526   43267 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.83.23 22 <nil> <nil>}
	I1216 03:31:44.305550   43267 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-127368' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-127368/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-127368' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:31:44.434120   43267 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:31:44.434165   43267 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5036/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5036/.minikube}
	I1216 03:31:44.434197   43267 buildroot.go:174] setting up certificates
	I1216 03:31:44.434209   43267 provision.go:84] configureAuth start
	I1216 03:31:44.437489   43267 main.go:143] libmachine: domain pause-127368 has defined MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:44.437989   43267 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e2:d9:06", ip: ""} in network mk-pause-127368: {Iface:virbr5 ExpiryTime:2025-12-16 04:30:40 +0000 UTC Type:0 Mac:52:54:00:e2:d9:06 Iaid: IPaddr:192.168.83.23 Prefix:24 Hostname:pause-127368 Clientid:01:52:54:00:e2:d9:06}
	I1216 03:31:44.438021   43267 main.go:143] libmachine: domain pause-127368 has defined IP address 192.168.83.23 and MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:44.441131   43267 main.go:143] libmachine: domain pause-127368 has defined MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:44.441593   43267 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e2:d9:06", ip: ""} in network mk-pause-127368: {Iface:virbr5 ExpiryTime:2025-12-16 04:30:40 +0000 UTC Type:0 Mac:52:54:00:e2:d9:06 Iaid: IPaddr:192.168.83.23 Prefix:24 Hostname:pause-127368 Clientid:01:52:54:00:e2:d9:06}
	I1216 03:31:44.441623   43267 main.go:143] libmachine: domain pause-127368 has defined IP address 192.168.83.23 and MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:44.441778   43267 provision.go:143] copyHostCerts
	I1216 03:31:44.441849   43267 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5036/.minikube/cert.pem, removing ...
	I1216 03:31:44.441866   43267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5036/.minikube/cert.pem
	I1216 03:31:44.441956   43267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5036/.minikube/cert.pem (1123 bytes)
	I1216 03:31:44.442080   43267 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5036/.minikube/key.pem, removing ...
	I1216 03:31:44.442093   43267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5036/.minikube/key.pem
	I1216 03:31:44.442128   43267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5036/.minikube/key.pem (1679 bytes)
	I1216 03:31:44.442208   43267 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5036/.minikube/ca.pem, removing ...
	I1216 03:31:44.442219   43267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5036/.minikube/ca.pem
	I1216 03:31:44.442250   43267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5036/.minikube/ca.pem (1078 bytes)
	I1216 03:31:44.442317   43267 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5036/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca-key.pem org=jenkins.pause-127368 san=[127.0.0.1 192.168.83.23 localhost minikube pause-127368]
	I1216 03:31:44.475445   43267 provision.go:177] copyRemoteCerts
	I1216 03:31:44.475507   43267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:31:44.478727   43267 main.go:143] libmachine: domain pause-127368 has defined MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:44.479321   43267 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e2:d9:06", ip: ""} in network mk-pause-127368: {Iface:virbr5 ExpiryTime:2025-12-16 04:30:40 +0000 UTC Type:0 Mac:52:54:00:e2:d9:06 Iaid: IPaddr:192.168.83.23 Prefix:24 Hostname:pause-127368 Clientid:01:52:54:00:e2:d9:06}
	I1216 03:31:44.479359   43267 main.go:143] libmachine: domain pause-127368 has defined IP address 192.168.83.23 and MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:44.479551   43267 sshutil.go:53] new ssh client: &{IP:192.168.83.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/pause-127368/id_rsa Username:docker}
	I1216 03:31:44.575613   43267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 03:31:44.615110   43267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 03:31:44.648227   43267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1216 03:31:44.688899   43267 provision.go:87] duration metric: took 254.677094ms to configureAuth
	I1216 03:31:44.688941   43267 buildroot.go:189] setting minikube options for container-runtime
	I1216 03:31:44.689163   43267 config.go:182] Loaded profile config "pause-127368": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:31:44.692184   43267 main.go:143] libmachine: domain pause-127368 has defined MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:44.692575   43267 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e2:d9:06", ip: ""} in network mk-pause-127368: {Iface:virbr5 ExpiryTime:2025-12-16 04:30:40 +0000 UTC Type:0 Mac:52:54:00:e2:d9:06 Iaid: IPaddr:192.168.83.23 Prefix:24 Hostname:pause-127368 Clientid:01:52:54:00:e2:d9:06}
	I1216 03:31:44.692610   43267 main.go:143] libmachine: domain pause-127368 has defined IP address 192.168.83.23 and MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:44.692794   43267 main.go:143] libmachine: Using SSH client type: native
	I1216 03:31:44.693023   43267 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.83.23 22 <nil> <nil>}
	I1216 03:31:44.693045   43267 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 03:31:50.277273   43267 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 03:31:50.277307   43267 machine.go:97] duration metric: took 6.25925305s to provisionDockerMachine
	I1216 03:31:50.277322   43267 start.go:293] postStartSetup for "pause-127368" (driver="kvm2")
	I1216 03:31:50.277334   43267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:31:50.277429   43267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:31:50.280285   43267 main.go:143] libmachine: domain pause-127368 has defined MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:50.280652   43267 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e2:d9:06", ip: ""} in network mk-pause-127368: {Iface:virbr5 ExpiryTime:2025-12-16 04:30:40 +0000 UTC Type:0 Mac:52:54:00:e2:d9:06 Iaid: IPaddr:192.168.83.23 Prefix:24 Hostname:pause-127368 Clientid:01:52:54:00:e2:d9:06}
	I1216 03:31:50.280673   43267 main.go:143] libmachine: domain pause-127368 has defined IP address 192.168.83.23 and MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:50.280813   43267 sshutil.go:53] new ssh client: &{IP:192.168.83.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/pause-127368/id_rsa Username:docker}
	I1216 03:31:50.369305   43267 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:31:50.374192   43267 info.go:137] Remote host: Buildroot 2025.02
	I1216 03:31:50.374218   43267 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5036/.minikube/addons for local assets ...
	I1216 03:31:50.374348   43267 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5036/.minikube/files for local assets ...
	I1216 03:31:50.374460   43267 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-5036/.minikube/files/etc/ssl/certs/89742.pem -> 89742.pem in /etc/ssl/certs
	I1216 03:31:50.374554   43267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:31:50.385217   43267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/files/etc/ssl/certs/89742.pem --> /etc/ssl/certs/89742.pem (1708 bytes)
	I1216 03:31:50.412381   43267 start.go:296] duration metric: took 135.044475ms for postStartSetup
	I1216 03:31:50.412422   43267 fix.go:56] duration metric: took 6.399097519s for fixHost
	I1216 03:31:50.415175   43267 main.go:143] libmachine: domain pause-127368 has defined MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:50.415661   43267 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e2:d9:06", ip: ""} in network mk-pause-127368: {Iface:virbr5 ExpiryTime:2025-12-16 04:30:40 +0000 UTC Type:0 Mac:52:54:00:e2:d9:06 Iaid: IPaddr:192.168.83.23 Prefix:24 Hostname:pause-127368 Clientid:01:52:54:00:e2:d9:06}
	I1216 03:31:50.415684   43267 main.go:143] libmachine: domain pause-127368 has defined IP address 192.168.83.23 and MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:50.415854   43267 main.go:143] libmachine: Using SSH client type: native
	I1216 03:31:50.416056   43267 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.83.23 22 <nil> <nil>}
	I1216 03:31:50.416066   43267 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1216 03:31:50.534781   43267 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765855910.528475536
	
	I1216 03:31:50.534802   43267 fix.go:216] guest clock: 1765855910.528475536
	I1216 03:31:50.534811   43267 fix.go:229] Guest: 2025-12-16 03:31:50.528475536 +0000 UTC Remote: 2025-12-16 03:31:50.412428551 +0000 UTC m=+10.090119943 (delta=116.046985ms)
	I1216 03:31:50.534833   43267 fix.go:200] guest clock delta is within tolerance: 116.046985ms
	I1216 03:31:50.534839   43267 start.go:83] releasing machines lock for "pause-127368", held for 6.521544518s
	I1216 03:31:50.538162   43267 main.go:143] libmachine: domain pause-127368 has defined MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:50.538626   43267 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e2:d9:06", ip: ""} in network mk-pause-127368: {Iface:virbr5 ExpiryTime:2025-12-16 04:30:40 +0000 UTC Type:0 Mac:52:54:00:e2:d9:06 Iaid: IPaddr:192.168.83.23 Prefix:24 Hostname:pause-127368 Clientid:01:52:54:00:e2:d9:06}
	I1216 03:31:50.538656   43267 main.go:143] libmachine: domain pause-127368 has defined IP address 192.168.83.23 and MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:50.539178   43267 ssh_runner.go:195] Run: cat /version.json
	I1216 03:31:50.539215   43267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:31:50.542282   43267 main.go:143] libmachine: domain pause-127368 has defined MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:50.542666   43267 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e2:d9:06", ip: ""} in network mk-pause-127368: {Iface:virbr5 ExpiryTime:2025-12-16 04:30:40 +0000 UTC Type:0 Mac:52:54:00:e2:d9:06 Iaid: IPaddr:192.168.83.23 Prefix:24 Hostname:pause-127368 Clientid:01:52:54:00:e2:d9:06}
	I1216 03:31:50.542738   43267 main.go:143] libmachine: domain pause-127368 has defined IP address 192.168.83.23 and MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:50.542791   43267 main.go:143] libmachine: domain pause-127368 has defined MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:50.542919   43267 sshutil.go:53] new ssh client: &{IP:192.168.83.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/pause-127368/id_rsa Username:docker}
	I1216 03:31:50.543315   43267 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e2:d9:06", ip: ""} in network mk-pause-127368: {Iface:virbr5 ExpiryTime:2025-12-16 04:30:40 +0000 UTC Type:0 Mac:52:54:00:e2:d9:06 Iaid: IPaddr:192.168.83.23 Prefix:24 Hostname:pause-127368 Clientid:01:52:54:00:e2:d9:06}
	I1216 03:31:50.543350   43267 main.go:143] libmachine: domain pause-127368 has defined IP address 192.168.83.23 and MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:50.543547   43267 sshutil.go:53] new ssh client: &{IP:192.168.83.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/pause-127368/id_rsa Username:docker}
	I1216 03:31:50.626676   43267 ssh_runner.go:195] Run: systemctl --version
	I1216 03:31:50.662966   43267 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 03:31:50.827221   43267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:31:50.836186   43267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:31:50.836254   43267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 03:31:50.847828   43267 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1216 03:31:50.847854   43267 start.go:496] detecting cgroup driver to use...
	I1216 03:31:50.847918   43267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 03:31:50.869777   43267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:31:50.889009   43267 docker.go:218] disabling cri-docker service (if available) ...
	I1216 03:31:50.889066   43267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 03:31:50.910124   43267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 03:31:50.926091   43267 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 03:31:51.112603   43267 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 03:31:51.309419   43267 docker.go:234] disabling docker service ...
	I1216 03:31:51.309490   43267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 03:31:51.345919   43267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 03:31:51.367160   43267 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 03:31:51.566791   43267 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 03:31:51.782756   43267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 03:31:51.802021   43267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:31:51.826218   43267 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 03:31:51.826284   43267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:31:51.838440   43267 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 03:31:51.838498   43267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:31:51.850872   43267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:31:51.863613   43267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:31:51.878890   43267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:31:51.893144   43267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:31:51.905440   43267 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:31:51.918677   43267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:31:51.932954   43267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:31:51.943272   43267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:31:51.957460   43267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:31:52.150032   43267 ssh_runner.go:195] Run: sudo systemctl restart crio
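The sequence above edits cri-o's drop-in config in place with sed (pause image, cgroupfs cgroup manager, conmon_cgroup, the unprivileged-port sysctl) and then restarts crio. A stdlib-only Go sketch of the two core substitutions, using the drop-in path and values shown in the log (illustrative; it mirrors the sed calls rather than minikube's crio.go):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setCrioOption rewrites `key = ...` lines in the cri-o drop-in, mirroring
    // the `sed -i 's|^.*key = .*$|key = "value"|'` commands in the log.
    func setCrioOption(conf []byte, key, value string) []byte {
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
    }

    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	conf, err := os.ReadFile(path)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
    	conf = setCrioOption(conf, "cgroup_manager", "cgroupfs")
    	if err := os.WriteFile(path, conf, 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("updated", path, "- restart crio for it to take effect")
    }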
	I1216 03:31:52.741335   43267 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 03:31:52.741420   43267 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 03:31:52.747346   43267 start.go:564] Will wait 60s for crictl version
	I1216 03:31:52.747424   43267 ssh_runner.go:195] Run: which crictl
	I1216 03:31:52.752238   43267 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 03:31:52.787529   43267 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 03:31:52.787644   43267 ssh_runner.go:195] Run: crio --version
	I1216 03:31:52.850487   43267 ssh_runner.go:195] Run: crio --version
	I1216 03:31:52.915046   43267 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1216 03:31:52.919405   43267 main.go:143] libmachine: domain pause-127368 has defined MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:52.919899   43267 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e2:d9:06", ip: ""} in network mk-pause-127368: {Iface:virbr5 ExpiryTime:2025-12-16 04:30:40 +0000 UTC Type:0 Mac:52:54:00:e2:d9:06 Iaid: IPaddr:192.168.83.23 Prefix:24 Hostname:pause-127368 Clientid:01:52:54:00:e2:d9:06}
	I1216 03:31:52.919941   43267 main.go:143] libmachine: domain pause-127368 has defined IP address 192.168.83.23 and MAC address 52:54:00:e2:d9:06 in network mk-pause-127368
	I1216 03:31:52.920164   43267 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1216 03:31:52.930427   43267 kubeadm.go:884] updating cluster {Name:pause-127368 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2
ClusterName:pause-127368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.23 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidi
a-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:31:52.930625   43267 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:31:52.930694   43267 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:31:53.086069   43267 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:31:53.086099   43267 crio.go:433] Images already preloaded, skipping extraction
	I1216 03:31:53.086157   43267 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:31:53.176230   43267 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:31:53.176260   43267 cache_images.go:86] Images are preloaded, skipping loading
	I1216 03:31:53.176271   43267 kubeadm.go:935] updating node { 192.168.83.23 8443 v1.34.2 crio true true} ...
	I1216 03:31:53.176427   43267 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-127368 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-127368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
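The kubelet drop-in printed above is what gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below: it clears ExecStart and relaunches the versioned kubelet binary with the node name and IP pinned. A small text/template sketch that renders the same [Service] section from those per-node values (illustrative; the real template lives inside minikube, not here):

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeletUnit holds the values that differ per node in the drop-in above.
    type kubeletUnit struct {
    	KubeletPath string
    	Hostname    string
    	NodeIP      string
    }

    const unitTmpl = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unitTmpl))
    	u := kubeletUnit{
    		KubeletPath: "/var/lib/minikube/binaries/v1.34.2/kubelet",
    		Hostname:    "pause-127368",
    		NodeIP:      "192.168.83.23",
    	}
    	if err := t.Execute(os.Stdout, u); err != nil {
    		panic(err)
    	}
    }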
	I1216 03:31:53.176523   43267 ssh_runner.go:195] Run: crio config
	I1216 03:31:53.342654   43267 cni.go:84] Creating CNI manager for ""
	I1216 03:31:53.342683   43267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 03:31:53.342704   43267 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 03:31:53.342735   43267 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.23 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-127368 NodeName:pause-127368 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:31:53.342937   43267 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-127368"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.23"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.23"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
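One detail worth noticing in the generated config above: the KubeletConfiguration pins cgroupDriver to cgroupfs and the CRI endpoint to unix:///var/run/crio/crio.sock, which must agree with the cgroup_manager that was written into cri-o's drop-in earlier in this log. A small sketch that decodes just those two fields for a sanity check, assuming gopkg.in/yaml.v3 and an ad-hoc struct (not minikube's own types):

    package main

    import (
    	"fmt"
    	"log"

    	"gopkg.in/yaml.v3"
    )

    // kubeletCfg captures only the fields we want to check from the
    // KubeletConfiguration document shown above.
    type kubeletCfg struct {
    	Kind                     string `yaml:"kind"`
    	CgroupDriver             string `yaml:"cgroupDriver"`
    	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
    }

    const doc = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    `

    func main() {
    	var cfg kubeletCfg
    	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
    		log.Fatal(err)
    	}
    	if cfg.CgroupDriver != "cgroupfs" {
    		log.Fatalf("kubelet cgroupDriver %q does not match cri-o's cgroup_manager", cfg.CgroupDriver)
    	}
    	fmt.Printf("%s: driver=%s endpoint=%s\n", cfg.Kind, cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint)
    }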
	
	I1216 03:31:53.343021   43267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 03:31:53.368909   43267 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 03:31:53.369017   43267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:31:53.397386   43267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1216 03:31:53.449590   43267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 03:31:53.513664   43267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1216 03:31:53.601787   43267 ssh_runner.go:195] Run: grep 192.168.83.23	control-plane.minikube.internal$ /etc/hosts
	I1216 03:31:53.614646   43267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:31:54.005561   43267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:31:54.069504   43267 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/pause-127368 for IP: 192.168.83.23
	I1216 03:31:54.069526   43267 certs.go:195] generating shared ca certs ...
	I1216 03:31:54.069544   43267 certs.go:227] acquiring lock for ca certs: {Name:mk77e952ddad6d1f2b7d1d07b6d50cdef35b56ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:31:54.069726   43267 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5036/.minikube/ca.key
	I1216 03:31:54.069778   43267 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.key
	I1216 03:31:54.069790   43267 certs.go:257] generating profile certs ...
	I1216 03:31:54.069900   43267 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/pause-127368/client.key
	I1216 03:31:54.070001   43267 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/pause-127368/apiserver.key.e4d3abec
	I1216 03:31:54.070057   43267 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/pause-127368/proxy-client.key
	I1216 03:31:54.070203   43267 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/8974.pem (1338 bytes)
	W1216 03:31:54.070245   43267 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5036/.minikube/certs/8974_empty.pem, impossibly tiny 0 bytes
	I1216 03:31:54.070258   43267 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:31:54.070291   43267 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem (1078 bytes)
	I1216 03:31:54.070339   43267 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:31:54.070371   43267 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/key.pem (1679 bytes)
	I1216 03:31:54.071036   43267 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/files/etc/ssl/certs/89742.pem (1708 bytes)
	I1216 03:31:54.071970   43267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:31:54.134012   43267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 03:31:54.192918   43267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:31:54.239330   43267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:31:54.294435   43267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/pause-127368/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 03:31:54.355594   43267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/pause-127368/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 03:31:54.417471   43267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/pause-127368/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:31:54.489717   43267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/pause-127368/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 03:31:54.596716   43267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/certs/8974.pem --> /usr/share/ca-certificates/8974.pem (1338 bytes)
	I1216 03:31:54.672077   43267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/files/etc/ssl/certs/89742.pem --> /usr/share/ca-certificates/89742.pem (1708 bytes)
	I1216 03:31:54.730718   43267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:31:54.775814   43267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:31:54.814440   43267 ssh_runner.go:195] Run: openssl version
	I1216 03:31:54.845738   43267 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8974.pem
	I1216 03:31:54.886202   43267 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8974.pem /etc/ssl/certs/8974.pem
	I1216 03:31:54.931423   43267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8974.pem
	I1216 03:31:54.945880   43267 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:36 /usr/share/ca-certificates/8974.pem
	I1216 03:31:54.945979   43267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8974.pem
	I1216 03:31:54.961570   43267 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:31:54.988453   43267 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89742.pem
	I1216 03:31:55.013586   43267 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89742.pem /etc/ssl/certs/89742.pem
	I1216 03:31:55.040335   43267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89742.pem
	I1216 03:31:55.064192   43267 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:36 /usr/share/ca-certificates/89742.pem
	I1216 03:31:55.064262   43267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89742.pem
	I1216 03:31:55.092220   43267 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:31:55.130232   43267 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:31:55.157236   43267 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:31:55.178888   43267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:31:55.189127   43267 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:31:55.189207   43267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:31:55.201523   43267 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:31:55.222538   43267 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 03:31:55.231722   43267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1216 03:31:55.245195   43267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1216 03:31:55.258748   43267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1216 03:31:55.268887   43267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1216 03:31:55.282858   43267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1216 03:31:55.296822   43267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
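Each `openssl x509 -checkend 86400` call above asks whether the named certificate will still be valid 24 hours from now (exit status 0 means it will), which is how stale control-plane certs get caught before kubeadm runs. The same probe expressed with Go's crypto/x509, over a few of the cert paths listed in the log (a sketch, not minikube's certs package):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // i.e. the inverse of a successful `openssl x509 -checkend` probe.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	paths := []string{
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    		"/var/lib/minikube/certs/front-proxy-client.crt",
    	}
    	for _, p := range paths {
    		expiring, err := expiresWithin(p, 24*time.Hour)
    		if err != nil {
    			fmt.Fprintln(os.Stderr, err)
    			continue
    		}
    		fmt.Printf("%s expiring within 24h: %v\n", p, expiring)
    	}
    }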
	I1216 03:31:55.308993   43267 kubeadm.go:401] StartCluster: {Name:pause-127368 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 Cl
usterName:pause-127368 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.23 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-g
pu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:31:55.309149   43267 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:31:55.309217   43267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:31:55.371941   43267 cri.go:89] found id: "c73fce5742a5738a73591a4789422a3ac9a9ac34d107f86ed41a13551bad41a7"
	I1216 03:31:55.371965   43267 cri.go:89] found id: "7886b3cb93db3490c21067b3f08a44be6ead41881cd5cb2681c295485c892b48"
	I1216 03:31:55.371971   43267 cri.go:89] found id: "1603771fc3bb957264f0e0f69af13f99d97b23a395e946185f6f8606e58076e3"
	I1216 03:31:55.371976   43267 cri.go:89] found id: "1e1943eab5540b2524451fd9149e5bdb48d529525992e4427f98907806a8120c"
	I1216 03:31:55.371980   43267 cri.go:89] found id: "6af7add7fe9690f182b65bb9686b7a53b05509c0da8b8a4bc1586130f494913d"
	I1216 03:31:55.371985   43267 cri.go:89] found id: "b8b867c1bdad08e7d4b11632415fa3ce077d2fbbc7d269b68165673c2c8ac4e0"
	I1216 03:31:55.371989   43267 cri.go:89] found id: "516464b244c242de261bac3b8cbd3e0bbc298412c90d2700a0cf9253021faa00"
	I1216 03:31:55.371994   43267 cri.go:89] found id: "a8bc982e97375733c6a6884402ec35e3c9d903a482fa1c0cec72a4d3d95e8461"
	I1216 03:31:55.371998   43267 cri.go:89] found id: "2e96f0cb1410c8109bf609900229a88bc8162f92f8318a2e7cbf083b31cd0050"
	I1216 03:31:55.372007   43267 cri.go:89] found id: "5625f27f367a7d7555860919ccfc373315df2bc1a1c3689aed6a359f22d5b62d"
	I1216 03:31:55.372011   43267 cri.go:89] found id: "25cafb4681eab4cf7f0278530b5be09e38e3155ff5120fbadabb938d0b14882e"
	I1216 03:31:55.372016   43267 cri.go:89] found id: "4a2bb8ba97dd0b3e5c3aa3b73fbbffd8d773e5fdd2227b6986d6e3c38cea3f16"
	I1216 03:31:55.372020   43267 cri.go:89] found id: ""
	I1216 03:31:55.372069   43267 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-127368 -n pause-127368
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-127368 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-127368 logs -n 25: (1.304891688s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-079027 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                        │ cilium-079027             │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │                     │
	│ ssh     │ -p cilium-079027 sudo cat /etc/containerd/config.toml                                                                                                                                                                   │ cilium-079027             │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │                     │
	│ ssh     │ -p cilium-079027 sudo containerd config dump                                                                                                                                                                            │ cilium-079027             │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │                     │
	│ ssh     │ -p cilium-079027 sudo systemctl status crio --all --full --no-pager                                                                                                                                                     │ cilium-079027             │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │                     │
	│ ssh     │ -p cilium-079027 sudo systemctl cat crio --no-pager                                                                                                                                                                     │ cilium-079027             │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │                     │
	│ ssh     │ -p cilium-079027 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                           │ cilium-079027             │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │                     │
	│ ssh     │ -p cilium-079027 sudo crio config                                                                                                                                                                                       │ cilium-079027             │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │                     │
	│ delete  │ -p cilium-079027                                                                                                                                                                                                        │ cilium-079027             │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │ 16 Dec 25 03:29 UTC │
	│ start   │ -p guest-064510 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                 │ guest-064510              │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │ 16 Dec 25 03:29 UTC │
	│ delete  │ -p force-systemd-env-050892                                                                                                                                                                                             │ force-systemd-env-050892  │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │ 16 Dec 25 03:29 UTC │
	│ start   │ -p cert-expiration-121062 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                    │ cert-expiration-121062    │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │ 16 Dec 25 03:30 UTC │
	│ delete  │ -p kubernetes-upgrade-352947                                                                                                                                                                                            │ kubernetes-upgrade-352947 │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │ 16 Dec 25 03:29 UTC │
	│ start   │ -p force-systemd-flag-103596 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                               │ force-systemd-flag-103596 │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │ 16 Dec 25 03:30 UTC │
	│ start   │ -p pause-127368 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                 │ pause-127368              │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │ 16 Dec 25 03:31 UTC │
	│ ssh     │ force-systemd-flag-103596 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                    │ force-systemd-flag-103596 │ jenkins │ v1.37.0 │ 16 Dec 25 03:30 UTC │ 16 Dec 25 03:30 UTC │
	│ delete  │ -p force-systemd-flag-103596                                                                                                                                                                                            │ force-systemd-flag-103596 │ jenkins │ v1.37.0 │ 16 Dec 25 03:30 UTC │ 16 Dec 25 03:30 UTC │
	│ start   │ -p cert-options-972236 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-972236       │ jenkins │ v1.37.0 │ 16 Dec 25 03:30 UTC │ 16 Dec 25 03:31 UTC │
	│ ssh     │ cert-options-972236 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                             │ cert-options-972236       │ jenkins │ v1.37.0 │ 16 Dec 25 03:31 UTC │ 16 Dec 25 03:31 UTC │
	│ ssh     │ -p cert-options-972236 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                           │ cert-options-972236       │ jenkins │ v1.37.0 │ 16 Dec 25 03:31 UTC │ 16 Dec 25 03:31 UTC │
	│ delete  │ -p cert-options-972236                                                                                                                                                                                                  │ cert-options-972236       │ jenkins │ v1.37.0 │ 16 Dec 25 03:31 UTC │ 16 Dec 25 03:31 UTC │
	│ start   │ -p auto-079027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                                                                                   │ auto-079027               │ jenkins │ v1.37.0 │ 16 Dec 25 03:31 UTC │                     │
	│ start   │ -p pause-127368 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-127368              │ jenkins │ v1.37.0 │ 16 Dec 25 03:31 UTC │ 16 Dec 25 03:32 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-418673 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ running-upgrade-418673    │ jenkins │ v1.37.0 │ 16 Dec 25 03:31 UTC │                     │
	│ delete  │ -p running-upgrade-418673                                                                                                                                                                                               │ running-upgrade-418673    │ jenkins │ v1.37.0 │ 16 Dec 25 03:32 UTC │ 16 Dec 25 03:32 UTC │
	│ start   │ -p kindnet-079027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                                                                                  │ kindnet-079027            │ jenkins │ v1.37.0 │ 16 Dec 25 03:32 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:32:01
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:32:01.335023   43455 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:32:01.335276   43455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:01.335284   43455 out.go:374] Setting ErrFile to fd 2...
	I1216 03:32:01.335288   43455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:01.335565   43455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
	I1216 03:32:01.336828   43455 out.go:368] Setting JSON to false
	I1216 03:32:01.337738   43455 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4466,"bootTime":1765851455,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:32:01.337791   43455 start.go:143] virtualization: kvm guest
	I1216 03:32:01.339938   43455 out.go:179] * [kindnet-079027] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:32:01.341275   43455 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:32:01.341271   43455 notify.go:221] Checking for updates...
	I1216 03:32:01.343546   43455 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:32:01.344752   43455 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	I1216 03:32:01.345963   43455 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	I1216 03:32:01.347185   43455 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:32:01.348360   43455 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:32:01.350264   43455 config.go:182] Loaded profile config "auto-079027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:32:01.350378   43455 config.go:182] Loaded profile config "cert-expiration-121062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:32:01.350482   43455 config.go:182] Loaded profile config "guest-064510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1216 03:32:01.350698   43455 config.go:182] Loaded profile config "pause-127368": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:32:01.350849   43455 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:32:01.399437   43455 out.go:179] * Using the kvm2 driver based on user configuration
	I1216 03:32:01.400662   43455 start.go:309] selected driver: kvm2
	I1216 03:32:01.400700   43455 start.go:927] validating driver "kvm2" against <nil>
	I1216 03:32:01.400714   43455 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:32:01.401703   43455 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 03:32:01.402041   43455 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:32:01.402072   43455 cni.go:84] Creating CNI manager for "kindnet"
	I1216 03:32:01.402080   43455 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 03:32:01.402118   43455 start.go:353] cluster config:
	{Name:kindnet-079027 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-079027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:32:01.402230   43455 iso.go:125] acquiring lock: {Name:mk055aa36b1051bc664b283a8a6fb2af4db94c44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:32:01.403464   43455 out.go:179] * Starting "kindnet-079027" primary control-plane node in "kindnet-079027" cluster
	I1216 03:32:01.404395   43455 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:32:01.404442   43455 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 03:32:01.404456   43455 cache.go:65] Caching tarball of preloaded images
	I1216 03:32:01.404534   43455 preload.go:238] Found /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 03:32:01.404547   43455 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 03:32:01.404652   43455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/config.json ...
	I1216 03:32:01.404677   43455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/config.json: {Name:mk29468448342ae4c959d22444e4b1b6618e5c5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:01.404829   43455 start.go:360] acquireMachinesLock for kindnet-079027: {Name:mk6501572e7fc03699ef9d932e34f995d8ad6f98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:32:01.404870   43455 start.go:364] duration metric: took 25.209µs to acquireMachinesLock for "kindnet-079027"
	I1216 03:32:01.404892   43455 start.go:93] Provisioning new machine with config: &{Name:kindnet-079027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.34.2 ClusterName:kindnet-079027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:32:01.404995   43455 start.go:125] createHost starting for "" (driver="kvm2")
	I1216 03:32:03.315984   43066 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 03:32:03.316052   43066 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:32:03.316154   43066 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:32:03.316291   43066 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:32:03.316447   43066 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:32:03.316564   43066 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:32:03.317849   43066 out.go:252]   - Generating certificates and keys ...
	I1216 03:32:03.317963   43066 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:32:03.318059   43066 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:32:03.318154   43066 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 03:32:03.318270   43066 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 03:32:03.318365   43066 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 03:32:03.318458   43066 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 03:32:03.318536   43066 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 03:32:03.318723   43066 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-079027 localhost] and IPs [192.168.50.67 127.0.0.1 ::1]
	I1216 03:32:03.318815   43066 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 03:32:03.318951   43066 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-079027 localhost] and IPs [192.168.50.67 127.0.0.1 ::1]
	I1216 03:32:03.319013   43066 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 03:32:03.319070   43066 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 03:32:03.319109   43066 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 03:32:03.319157   43066 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:32:03.319205   43066 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:32:03.319257   43066 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:32:03.319322   43066 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:32:03.319420   43066 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:32:03.319505   43066 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:32:03.319645   43066 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:32:03.319736   43066 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:32:03.321148   43066 out.go:252]   - Booting up control plane ...
	I1216 03:32:03.321238   43066 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:32:03.321303   43066 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:32:03.321360   43066 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:32:03.321442   43066 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:32:03.321522   43066 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:32:03.321621   43066 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:32:03.321713   43066 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:32:03.321799   43066 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:32:03.322012   43066 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:32:03.322190   43066 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:32:03.322300   43066 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002334799s
	I1216 03:32:03.322427   43066 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 03:32:03.322534   43066 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.50.67:8443/livez
	I1216 03:32:03.322656   43066 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 03:32:03.322763   43066 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 03:32:03.322886   43066 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.889705921s
	I1216 03:32:03.323024   43066 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.367765911s
	I1216 03:32:03.323129   43066 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.005516381s
	I1216 03:32:03.323265   43066 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:32:03.323414   43066 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:32:03.323514   43066 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:32:03.323779   43066 kubeadm.go:319] [mark-control-plane] Marking the node auto-079027 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:32:03.323876   43066 kubeadm.go:319] [bootstrap-token] Using token: vhq760.ftqshaumwpqec4fg
	I1216 03:32:03.325135   43066 out.go:252]   - Configuring RBAC rules ...
	I1216 03:32:03.325277   43066 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:32:03.325404   43066 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:32:03.325553   43066 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:32:03.325701   43066 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:32:03.325822   43066 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:32:03.325944   43066 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:32:03.326084   43066 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:32:03.326123   43066 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 03:32:03.326202   43066 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 03:32:03.326213   43066 kubeadm.go:319] 
	I1216 03:32:03.326302   43066 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 03:32:03.326314   43066 kubeadm.go:319] 
	I1216 03:32:03.326421   43066 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 03:32:03.326429   43066 kubeadm.go:319] 
	I1216 03:32:03.326465   43066 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 03:32:03.326551   43066 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:32:03.326643   43066 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:32:03.326659   43066 kubeadm.go:319] 
	I1216 03:32:03.326740   43066 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 03:32:03.326751   43066 kubeadm.go:319] 
	I1216 03:32:03.326830   43066 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:32:03.326841   43066 kubeadm.go:319] 
	I1216 03:32:03.326911   43066 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 03:32:03.327040   43066 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:32:03.327151   43066 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:32:03.327161   43066 kubeadm.go:319] 
	I1216 03:32:03.327246   43066 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:32:03.327350   43066 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 03:32:03.327359   43066 kubeadm.go:319] 
	I1216 03:32:03.327442   43066 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token vhq760.ftqshaumwpqec4fg \
	I1216 03:32:03.327591   43066 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6d3bac17af9f836812b78bb65fe3149db071d191150485ad31b907e98cbc14f1 \
	I1216 03:32:03.327631   43066 kubeadm.go:319] 	--control-plane 
	I1216 03:32:03.327646   43066 kubeadm.go:319] 
	I1216 03:32:03.327762   43066 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:32:03.327774   43066 kubeadm.go:319] 
	I1216 03:32:03.327896   43066 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token vhq760.ftqshaumwpqec4fg \
	I1216 03:32:03.328072   43066 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6d3bac17af9f836812b78bb65fe3149db071d191150485ad31b907e98cbc14f1 
	I1216 03:32:03.328090   43066 cni.go:84] Creating CNI manager for ""
	I1216 03:32:03.328099   43066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 03:32:03.329541   43066 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
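	(Editorial note: the kubeadm output above ends with the standard post-init how-to — copy /etc/kubernetes/admin.conf into $HOME/.kube/config — plus the join command, before minikube moves on to the bridge CNI. As a hedged illustration only, not minikube's own code path (which drives these steps over SSH), the kubeconfig copy could be done programmatically like this; the paths are the ones printed in the log, and the chown step is omitted because the sketch assumes it already runs as the target user.)

	// kubeconfig_copy.go: illustrative sketch of the "mkdir -p $HOME/.kube && cp admin.conf"
	// step printed by kubeadm above. Assumes root-readable /etc/kubernetes/admin.conf.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		home, err := os.UserHomeDir()
		if err != nil {
			panic(err)
		}
		kubeDir := filepath.Join(home, ".kube")
		if err := os.MkdirAll(kubeDir, 0o755); err != nil { // mkdir -p $HOME/.kube
			panic(err)
		}
		data, err := os.ReadFile("/etc/kubernetes/admin.conf") // requires root
		if err != nil {
			panic(err)
		}
		dst := filepath.Join(kubeDir, "config")
		if err := os.WriteFile(dst, data, 0o600); err != nil { // cp admin.conf $HOME/.kube/config
			panic(err)
		}
		fmt.Println("wrote", dst)
	}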
	I1216 03:32:01.406423   43455 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 03:32:01.406605   43455 start.go:159] libmachine.API.Create for "kindnet-079027" (driver="kvm2")
	I1216 03:32:01.406636   43455 client.go:173] LocalClient.Create starting
	I1216 03:32:01.406706   43455 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem
	I1216 03:32:01.406741   43455 main.go:143] libmachine: Decoding PEM data...
	I1216 03:32:01.406765   43455 main.go:143] libmachine: Parsing certificate...
	I1216 03:32:01.406827   43455 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem
	I1216 03:32:01.406853   43455 main.go:143] libmachine: Decoding PEM data...
	I1216 03:32:01.406870   43455 main.go:143] libmachine: Parsing certificate...
	I1216 03:32:01.407150   43455 main.go:143] libmachine: creating domain...
	I1216 03:32:01.407165   43455 main.go:143] libmachine: creating network...
	I1216 03:32:01.408644   43455 main.go:143] libmachine: found existing default network
	I1216 03:32:01.408887   43455 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1216 03:32:01.409717   43455 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:70:3c:ab} reservation:<nil>}
	I1216 03:32:01.410693   43455 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f2:2b:23} reservation:<nil>}
	I1216 03:32:01.411196   43455 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:47:35:32} reservation:<nil>}
	I1216 03:32:01.412028   43455 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bc0ed0}
	I1216 03:32:01.412120   43455 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-kindnet-079027</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1216 03:32:01.417342   43455 main.go:143] libmachine: creating private network mk-kindnet-079027 192.168.72.0/24...
	I1216 03:32:01.490946   43455 main.go:143] libmachine: private network mk-kindnet-079027 192.168.72.0/24 created
	I1216 03:32:01.491299   43455 main.go:143] libmachine: <network>
	  <name>mk-kindnet-079027</name>
	  <uuid>8fe7437f-3676-48c0-bfd5-b979f9a3095b</uuid>
	  <bridge name='virbr4' stp='on' delay='0'/>
	  <mac address='52:54:00:b9:22:68'/>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
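	(Editorial note: minikube created the private libvirt network above — mk-kindnet-079027, 192.168.72.0/24, DHCP range .2–.253 — through its own libvirt bindings. A hedged, minimal equivalent that shells out to the virsh CLI is sketched below; the network name and XML are the ones shown in the log, while the file path /tmp/mk-kindnet-079027.xml is hypothetical.)

	// define_net.go: minimal sketch of defining and starting a libvirt network like
	// mk-kindnet-079027 via virsh. Assumes virsh is installed and the caller may
	// manage libvirt; the XML file would hold the <network> document printed above.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(args ...string) {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("virsh %v: %v\n%s", args, err, out)
		}
		log.Printf("virsh %v:\n%s", args, out)
	}

	func main() {
		run("net-define", "/tmp/mk-kindnet-079027.xml") // register the network from XML
		run("net-start", "mk-kindnet-079027")           // bring up the bridge (virbr4 in the log)
		run("net-autostart", "mk-kindnet-079027")       // optional: start with libvirtd
	}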
	
	I1216 03:32:01.491329   43455 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027 ...
	I1216 03:32:01.491350   43455 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22158-5036/.minikube/cache/iso/amd64/minikube-v1.37.0-1765836331-22158-amd64.iso
	I1216 03:32:01.491359   43455 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22158-5036/.minikube
	I1216 03:32:01.491434   43455 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22158-5036/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22158-5036/.minikube/cache/iso/amd64/minikube-v1.37.0-1765836331-22158-amd64.iso...
	I1216 03:32:01.748015   43455 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/id_rsa...
	I1216 03:32:01.864709   43455 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/kindnet-079027.rawdisk...
	I1216 03:32:01.864776   43455 main.go:143] libmachine: Writing magic tar header
	I1216 03:32:01.864811   43455 main.go:143] libmachine: Writing SSH key tar header
	I1216 03:32:01.864968   43455 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027 ...
	I1216 03:32:01.865054   43455 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027
	I1216 03:32:01.865094   43455 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027 (perms=drwx------)
	I1216 03:32:01.865115   43455 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22158-5036/.minikube/machines
	I1216 03:32:01.865132   43455 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22158-5036/.minikube/machines (perms=drwxr-xr-x)
	I1216 03:32:01.865150   43455 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22158-5036/.minikube
	I1216 03:32:01.865181   43455 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22158-5036/.minikube (perms=drwxr-xr-x)
	I1216 03:32:01.865200   43455 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22158-5036
	I1216 03:32:01.865214   43455 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22158-5036 (perms=drwxrwxr-x)
	I1216 03:32:01.865227   43455 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1216 03:32:01.865238   43455 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1216 03:32:01.865251   43455 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1216 03:32:01.865269   43455 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1216 03:32:01.865287   43455 main.go:143] libmachine: checking permissions on dir: /home
	I1216 03:32:01.865300   43455 main.go:143] libmachine: skipping /home - not owner
	I1216 03:32:01.865310   43455 main.go:143] libmachine: defining domain...
	I1216 03:32:01.867056   43455 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>kindnet-079027</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/kindnet-079027.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-kindnet-079027'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1216 03:32:01.872164   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:d9:8f:3b in network default
	I1216 03:32:01.872878   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
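	(Editorial note: the two lines above show minikube reading the defined domain back to learn the libvirt-assigned MAC address per attached network; those MACs also appear in the full domain XML dumped a little further below. A hedged sketch of extracting interface MAC/source-network pairs from such XML with encoding/xml follows; the input file name is hypothetical, e.g. the output of `virsh dumpxml kindnet-079027` saved to disk.)

	// domain_macs.go: hedged sketch of reading libvirt-assigned MAC addresses out of a
	// domain's XML, which is how a log line like the ones above can be produced.
	package main

	import (
		"encoding/xml"
		"fmt"
		"log"
		"os"
	)

	type domain struct {
		Name       string `xml:"name"`
		Interfaces []struct {
			MAC struct {
				Address string `xml:"address,attr"`
			} `xml:"mac"`
			Source struct {
				Network string `xml:"network,attr"`
			} `xml:"source"`
		} `xml:"devices>interface"`
	}

	func main() {
		data, err := os.ReadFile("kindnet-079027.xml") // hypothetical dump file
		if err != nil {
			log.Fatal(err)
		}
		var d domain
		if err := xml.Unmarshal(data, &d); err != nil {
			log.Fatal(err)
		}
		for _, iface := range d.Interfaces {
			fmt.Printf("domain %s has MAC %s in network %s\n", d.Name, iface.MAC.Address, iface.Source.Network)
		}
	}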
	I1216 03:32:01.872897   43455 main.go:143] libmachine: starting domain...
	I1216 03:32:01.872902   43455 main.go:143] libmachine: ensuring networks are active...
	I1216 03:32:01.873689   43455 main.go:143] libmachine: Ensuring network default is active
	I1216 03:32:01.874327   43455 main.go:143] libmachine: Ensuring network mk-kindnet-079027 is active
	I1216 03:32:01.875341   43455 main.go:143] libmachine: getting domain XML...
	I1216 03:32:01.876826   43455 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>kindnet-079027</name>
	  <uuid>1d2bf1bb-66d0-4601-995b-378d47476890</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/kindnet-079027.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:0f:e2:b0'/>
	      <source network='mk-kindnet-079027'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:d9:8f:3b'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1216 03:32:03.270886   43455 main.go:143] libmachine: waiting for domain to start...
	I1216 03:32:03.272261   43455 main.go:143] libmachine: domain is now running
	I1216 03:32:03.272281   43455 main.go:143] libmachine: waiting for IP...
	I1216 03:32:03.273327   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:03.274030   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:03.274044   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:03.274361   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:03.274402   43455 retry.go:31] will retry after 224.720932ms: waiting for domain to come up
	I1216 03:32:03.501103   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:03.501956   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:03.501976   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:03.502456   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:03.502495   43455 retry.go:31] will retry after 327.206572ms: waiting for domain to come up
	I1216 03:32:03.831106   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:03.831903   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:03.831932   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:03.832266   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:03.832300   43455 retry.go:31] will retry after 386.458842ms: waiting for domain to come up
	I1216 03:32:04.219723   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:04.220375   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:04.220395   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:04.220713   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:04.220750   43455 retry.go:31] will retry after 398.825546ms: waiting for domain to come up
	I1216 03:32:04.621120   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:04.621970   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:04.621989   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:04.622346   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:04.622394   43455 retry.go:31] will retry after 708.753951ms: waiting for domain to come up
	I1216 03:32:05.333424   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:05.334192   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:05.334213   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:05.334579   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:05.334621   43455 retry.go:31] will retry after 707.904265ms: waiting for domain to come up
	I1216 03:32:06.044388   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:06.044964   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:06.044988   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:06.045398   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:06.045439   43455 retry.go:31] will retry after 1.00904731s: waiting for domain to come up
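	(Editorial note: the loop above polls libvirt for a DHCP lease (source=lease, falling back to source=arp) with a growing backoff until the guest comes up with an address. A hedged, minimal equivalent that polls via `virsh domifaddr` is sketched below; the domain name comes from the log, while the backoff constants and output parsing are illustrative, not minikube's retry.go.)

	// wait_for_ip.go: illustrative sketch of waiting for a libvirt guest to obtain an
	// address, mirroring the lease -> arp fallback and backoff retries in the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func domIfAddr(domain, source string) (string, bool) {
		out, err := exec.Command("virsh", "domifaddr", domain, "--source", source).Output()
		if err != nil {
			return "", false
		}
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, "ipv4") {
				fields := strings.Fields(line)
				return fields[len(fields)-1], true // e.g. 192.168.72.85/24
			}
		}
		return "", false
	}

	func main() {
		backoff := 250 * time.Millisecond
		for {
			for _, src := range []string{"lease", "arp"} { // same fallback order as the log
				if addr, ok := domIfAddr("kindnet-079027", src); ok {
					fmt.Println("domain IP:", addr)
					return
				}
			}
			time.Sleep(backoff)
			if backoff < 2*time.Second { // grow the wait between attempts
				backoff += backoff / 2
			}
		}
	}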
	I1216 03:32:03.330591   43066 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 03:32:03.344643   43066 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 03:32:03.368661   43066 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:32:03.368806   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:32:03.368813   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-079027 minikube.k8s.io/updated_at=2025_12_16T03_32_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=auto-079027 minikube.k8s.io/primary=true
	I1216 03:32:03.415136   43066 ops.go:34] apiserver oom_adj: -16
	I1216 03:32:03.512124   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:32:04.013140   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:32:04.512547   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:32:05.012990   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:32:05.512794   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:32:06.013050   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:32:06.512419   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:32:07.012942   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:32:07.512308   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:32:07.620329   43066 kubeadm.go:1114] duration metric: took 4.251590465s to wait for elevateKubeSystemPrivileges
	I1216 03:32:07.620377   43066 kubeadm.go:403] duration metric: took 17.790594517s to StartCluster
	I1216 03:32:07.620401   43066 settings.go:142] acquiring lock: {Name:mk546ecdfe1860ae68a814905b53e6453298b4fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:07.620491   43066 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5036/kubeconfig
	I1216 03:32:07.621836   43066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/kubeconfig: {Name:mk6832d71ef0ad581fa898dceefc2fcc2fd665b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:07.622075   43066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 03:32:07.622079   43066 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.50.67 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:32:07.622176   43066 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:32:07.622264   43066 addons.go:70] Setting storage-provisioner=true in profile "auto-079027"
	I1216 03:32:07.622285   43066 addons.go:239] Setting addon storage-provisioner=true in "auto-079027"
	I1216 03:32:07.622293   43066 config.go:182] Loaded profile config "auto-079027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:32:07.622318   43066 addons.go:70] Setting default-storageclass=true in profile "auto-079027"
	I1216 03:32:07.622349   43066 host.go:66] Checking if "auto-079027" exists ...
	I1216 03:32:07.622353   43066 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-079027"
	I1216 03:32:07.623374   43066 out.go:179] * Verifying Kubernetes components...
	I1216 03:32:07.624712   43066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:32:07.624769   43066 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:32:07.625955   43066 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:32:07.625972   43066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:32:07.626093   43066 addons.go:239] Setting addon default-storageclass=true in "auto-079027"
	I1216 03:32:07.626129   43066 host.go:66] Checking if "auto-079027" exists ...
	I1216 03:32:07.627939   43066 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:32:07.627958   43066 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:32:07.629306   43066 main.go:143] libmachine: domain auto-079027 has defined MAC address 52:54:00:0b:f1:e9 in network mk-auto-079027
	I1216 03:32:07.629760   43066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:f1:e9", ip: ""} in network mk-auto-079027: {Iface:virbr2 ExpiryTime:2025-12-16 04:31:41 +0000 UTC Type:0 Mac:52:54:00:0b:f1:e9 Iaid: IPaddr:192.168.50.67 Prefix:24 Hostname:auto-079027 Clientid:01:52:54:00:0b:f1:e9}
	I1216 03:32:07.629794   43066 main.go:143] libmachine: domain auto-079027 has defined IP address 192.168.50.67 and MAC address 52:54:00:0b:f1:e9 in network mk-auto-079027
	I1216 03:32:07.629991   43066 sshutil.go:53] new ssh client: &{IP:192.168.50.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/auto-079027/id_rsa Username:docker}
	I1216 03:32:07.630672   43066 main.go:143] libmachine: domain auto-079027 has defined MAC address 52:54:00:0b:f1:e9 in network mk-auto-079027
	I1216 03:32:07.631112   43066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:f1:e9", ip: ""} in network mk-auto-079027: {Iface:virbr2 ExpiryTime:2025-12-16 04:31:41 +0000 UTC Type:0 Mac:52:54:00:0b:f1:e9 Iaid: IPaddr:192.168.50.67 Prefix:24 Hostname:auto-079027 Clientid:01:52:54:00:0b:f1:e9}
	I1216 03:32:07.631134   43066 main.go:143] libmachine: domain auto-079027 has defined IP address 192.168.50.67 and MAC address 52:54:00:0b:f1:e9 in network mk-auto-079027
	I1216 03:32:07.631317   43066 sshutil.go:53] new ssh client: &{IP:192.168.50.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/auto-079027/id_rsa Username:docker}
	I1216 03:32:07.833455   43066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 03:32:07.944624   43066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:32:08.026027   43066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:32:08.237891   43066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:32:08.526725   43066 start.go:977] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1216 03:32:08.528052   43066 node_ready.go:35] waiting up to 15m0s for node "auto-079027" to be "Ready" ...
	I1216 03:32:08.553608   43066 node_ready.go:49] node "auto-079027" is "Ready"
	I1216 03:32:08.553643   43066 node_ready.go:38] duration metric: took 25.542756ms for node "auto-079027" to be "Ready" ...
	I1216 03:32:08.553659   43066 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:32:08.553720   43066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:32:08.931894   43066 api_server.go:72] duration metric: took 1.309778359s to wait for apiserver process to appear ...
	I1216 03:32:08.931939   43066 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:32:08.931957   43066 api_server.go:253] Checking apiserver healthz at https://192.168.50.67:8443/healthz ...
	I1216 03:32:08.948970   43066 api_server.go:279] https://192.168.50.67:8443/healthz returned 200:
	ok
	I1216 03:32:08.951488   43066 api_server.go:141] control plane version: v1.34.2
	I1216 03:32:08.951521   43066 api_server.go:131] duration metric: took 19.572937ms to wait for apiserver health ...
	I1216 03:32:08.951534   43066 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:32:08.974234   43066 system_pods.go:59] 8 kube-system pods found
	I1216 03:32:08.974278   43066 system_pods.go:61] "coredns-66bc5c9577-gqnr7" [bc517995-7ee3-4135-92b3-d5b4465a51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:08.974293   43066 system_pods.go:61] "coredns-66bc5c9577-tf8wg" [4a564d18-d8e4-4a87-aad0-6ce2e6d936e2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:08.974305   43066 system_pods.go:61] "etcd-auto-079027" [823d98fd-8545-4d70-9d38-5d96c9a2c02d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:32:08.974314   43066 system_pods.go:61] "kube-apiserver-auto-079027" [09971dd6-36ae-4e94-89f5-69e70ab7bb20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:32:08.974320   43066 system_pods.go:61] "kube-controller-manager-auto-079027" [d8e22d56-0a02-448f-a82d-df8d8b1c143c] Running
	I1216 03:32:08.974328   43066 system_pods.go:61] "kube-proxy-z27dv" [67e5f5e9-bf74-45fe-b739-c9a0d1a645f2] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 03:32:08.974335   43066 system_pods.go:61] "kube-scheduler-auto-079027" [4721e71a-d439-4362-bab7-0a00dae17433] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:32:08.974354   43066 system_pods.go:61] "storage-provisioner" [dee19316-4e78-49d3-934f-0235f1cac50d] Pending
	I1216 03:32:08.974367   43066 system_pods.go:74] duration metric: took 22.825585ms to wait for pod list to return data ...
	I1216 03:32:08.974374   43066 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:32:08.985944   43066 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
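	(Editorial note: in the segment above, the apiserver is considered healthy once GET /healthz on the advertised endpoint returns 200 with body "ok". A hedged sketch of the same probe in plain Go follows; the URL is the one from the log, and TLS verification is skipped only to keep the sketch short — a real client should trust the cluster CA instead.)

	// healthz_probe.go: minimal sketch of polling the kube-apiserver /healthz endpoint,
	// as the log does above, until it returns HTTP 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.50.67:8443/healthz") // endpoint from the log
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect "ok"
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}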
	I1216 03:32:07.055478   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:07.056241   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:07.056259   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:07.056617   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:07.056647   43455 retry.go:31] will retry after 910.76854ms: waiting for domain to come up
	I1216 03:32:07.969280   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:07.970004   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:07.970023   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:07.970460   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:07.970497   43455 retry.go:31] will retry after 1.364536663s: waiting for domain to come up
	I1216 03:32:09.336440   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:09.337287   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:09.337309   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:09.337740   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:09.337783   43455 retry.go:31] will retry after 1.638483619s: waiting for domain to come up
	I1216 03:32:10.977318   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:10.978137   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:10.978155   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:10.978635   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:10.978670   43455 retry.go:31] will retry after 1.809483931s: waiting for domain to come up
	I1216 03:32:08.986281   43066 default_sa.go:45] found service account: "default"
	I1216 03:32:08.986304   43066 default_sa.go:55] duration metric: took 11.922733ms for default service account to be created ...
	I1216 03:32:08.986319   43066 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 03:32:08.987122   43066 addons.go:530] duration metric: took 1.36495172s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 03:32:08.993395   43066 system_pods.go:86] 8 kube-system pods found
	I1216 03:32:08.993428   43066 system_pods.go:89] "coredns-66bc5c9577-gqnr7" [bc517995-7ee3-4135-92b3-d5b4465a51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:08.993439   43066 system_pods.go:89] "coredns-66bc5c9577-tf8wg" [4a564d18-d8e4-4a87-aad0-6ce2e6d936e2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:08.993449   43066 system_pods.go:89] "etcd-auto-079027" [823d98fd-8545-4d70-9d38-5d96c9a2c02d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:32:08.993462   43066 system_pods.go:89] "kube-apiserver-auto-079027" [09971dd6-36ae-4e94-89f5-69e70ab7bb20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:32:08.993473   43066 system_pods.go:89] "kube-controller-manager-auto-079027" [d8e22d56-0a02-448f-a82d-df8d8b1c143c] Running
	I1216 03:32:08.993485   43066 system_pods.go:89] "kube-proxy-z27dv" [67e5f5e9-bf74-45fe-b739-c9a0d1a645f2] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 03:32:08.993497   43066 system_pods.go:89] "kube-scheduler-auto-079027" [4721e71a-d439-4362-bab7-0a00dae17433] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:32:08.993534   43066 system_pods.go:89] "storage-provisioner" [dee19316-4e78-49d3-934f-0235f1cac50d] Pending
	I1216 03:32:08.993579   43066 retry.go:31] will retry after 265.607367ms: missing components: kube-dns, kube-proxy
	I1216 03:32:09.032323   43066 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-079027" context rescaled to 1 replicas
	I1216 03:32:09.268648   43066 system_pods.go:86] 8 kube-system pods found
	I1216 03:32:09.268691   43066 system_pods.go:89] "coredns-66bc5c9577-gqnr7" [bc517995-7ee3-4135-92b3-d5b4465a51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:09.268702   43066 system_pods.go:89] "coredns-66bc5c9577-tf8wg" [4a564d18-d8e4-4a87-aad0-6ce2e6d936e2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:09.268712   43066 system_pods.go:89] "etcd-auto-079027" [823d98fd-8545-4d70-9d38-5d96c9a2c02d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:32:09.268721   43066 system_pods.go:89] "kube-apiserver-auto-079027" [09971dd6-36ae-4e94-89f5-69e70ab7bb20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:32:09.268729   43066 system_pods.go:89] "kube-controller-manager-auto-079027" [d8e22d56-0a02-448f-a82d-df8d8b1c143c] Running
	I1216 03:32:09.268746   43066 system_pods.go:89] "kube-proxy-z27dv" [67e5f5e9-bf74-45fe-b739-c9a0d1a645f2] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 03:32:09.268758   43066 system_pods.go:89] "kube-scheduler-auto-079027" [4721e71a-d439-4362-bab7-0a00dae17433] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:32:09.268768   43066 system_pods.go:89] "storage-provisioner" [dee19316-4e78-49d3-934f-0235f1cac50d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:32:09.268794   43066 retry.go:31] will retry after 280.2749ms: missing components: kube-dns, kube-proxy
	I1216 03:32:09.556471   43066 system_pods.go:86] 8 kube-system pods found
	I1216 03:32:09.556515   43066 system_pods.go:89] "coredns-66bc5c9577-gqnr7" [bc517995-7ee3-4135-92b3-d5b4465a51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:09.556526   43066 system_pods.go:89] "coredns-66bc5c9577-tf8wg" [4a564d18-d8e4-4a87-aad0-6ce2e6d936e2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:09.556535   43066 system_pods.go:89] "etcd-auto-079027" [823d98fd-8545-4d70-9d38-5d96c9a2c02d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:32:09.556544   43066 system_pods.go:89] "kube-apiserver-auto-079027" [09971dd6-36ae-4e94-89f5-69e70ab7bb20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:32:09.556555   43066 system_pods.go:89] "kube-controller-manager-auto-079027" [d8e22d56-0a02-448f-a82d-df8d8b1c143c] Running
	I1216 03:32:09.556563   43066 system_pods.go:89] "kube-proxy-z27dv" [67e5f5e9-bf74-45fe-b739-c9a0d1a645f2] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 03:32:09.556574   43066 system_pods.go:89] "kube-scheduler-auto-079027" [4721e71a-d439-4362-bab7-0a00dae17433] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:32:09.556582   43066 system_pods.go:89] "storage-provisioner" [dee19316-4e78-49d3-934f-0235f1cac50d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:32:09.556604   43066 retry.go:31] will retry after 450.685399ms: missing components: kube-dns, kube-proxy
	I1216 03:32:10.013349   43066 system_pods.go:86] 8 kube-system pods found
	I1216 03:32:10.013382   43066 system_pods.go:89] "coredns-66bc5c9577-gqnr7" [bc517995-7ee3-4135-92b3-d5b4465a51fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:10.013394   43066 system_pods.go:89] "coredns-66bc5c9577-tf8wg" [4a564d18-d8e4-4a87-aad0-6ce2e6d936e2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:10.013404   43066 system_pods.go:89] "etcd-auto-079027" [823d98fd-8545-4d70-9d38-5d96c9a2c02d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:32:10.013412   43066 system_pods.go:89] "kube-apiserver-auto-079027" [09971dd6-36ae-4e94-89f5-69e70ab7bb20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:32:10.013418   43066 system_pods.go:89] "kube-controller-manager-auto-079027" [d8e22d56-0a02-448f-a82d-df8d8b1c143c] Running
	I1216 03:32:10.013425   43066 system_pods.go:89] "kube-proxy-z27dv" [67e5f5e9-bf74-45fe-b739-c9a0d1a645f2] Running
	I1216 03:32:10.013432   43066 system_pods.go:89] "kube-scheduler-auto-079027" [4721e71a-d439-4362-bab7-0a00dae17433] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:32:10.013437   43066 system_pods.go:89] "storage-provisioner" [dee19316-4e78-49d3-934f-0235f1cac50d] Running
	I1216 03:32:10.013458   43066 system_pods.go:126] duration metric: took 1.02712819s to wait for k8s-apps to be running ...
	I1216 03:32:10.013471   43066 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 03:32:10.013523   43066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:32:10.036461   43066 system_svc.go:56] duration metric: took 22.982493ms WaitForService to wait for kubelet
	I1216 03:32:10.036488   43066 kubeadm.go:587] duration metric: took 2.414375877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:32:10.036510   43066 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:32:10.040949   43066 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 03:32:10.040970   43066 node_conditions.go:123] node cpu capacity is 2
	I1216 03:32:10.040985   43066 node_conditions.go:105] duration metric: took 4.468358ms to run NodePressure ...
	I1216 03:32:10.040997   43066 start.go:242] waiting for startup goroutines ...
	I1216 03:32:10.041007   43066 start.go:247] waiting for cluster config update ...
	I1216 03:32:10.041020   43066 start.go:256] writing updated cluster config ...
	I1216 03:32:10.041279   43066 ssh_runner.go:195] Run: rm -f paused
	I1216 03:32:10.046324   43066 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:32:10.050365   43066 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gqnr7" in "kube-system" namespace to be "Ready" or be gone ...
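	(Editorial note: the checks above poll the kube-system pods until kube-dns and kube-proxy are Running, then wait on the per-pod Ready conditions for the labels listed in the log. A hedged equivalent driven from outside the cluster, shelling out to `kubectl wait`, is sketched below; the context name and label selectors are taken from the log, and the timeout is illustrative.)

	// wait_ready.go: illustrative sketch of waiting for core kube-system pods to become
	// Ready, roughly what the log above does through the API. Shells out to kubectl.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		selectors := []string{"k8s-app=kube-dns", "k8s-app=kube-proxy"} // components the log waits on
		for _, sel := range selectors {
			cmd := exec.Command("kubectl", "--context", "auto-079027",
				"-n", "kube-system", "wait", "--for=condition=Ready",
				"pod", "-l", sel, "--timeout=240s")
			out, err := cmd.CombinedOutput()
			if err != nil {
				log.Fatalf("kubectl wait %s: %v\n%s", sel, err, out)
			}
			log.Printf("%s", out)
		}
	}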
	I1216 03:32:12.790481   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:12.791225   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:12.791242   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:12.791559   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:12.791595   43455 retry.go:31] will retry after 2.685854796s: waiting for domain to come up
	I1216 03:32:15.479865   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:15.480463   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:15.480482   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:15.480832   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:15.480867   43455 retry.go:31] will retry after 3.260389682s: waiting for domain to come up
	W1216 03:32:12.058163   43066 pod_ready.go:104] pod "coredns-66bc5c9577-gqnr7" is not "Ready", error: <nil>
	W1216 03:32:14.557817   43066 pod_ready.go:104] pod "coredns-66bc5c9577-gqnr7" is not "Ready", error: <nil>
	I1216 03:32:15.842752   43267 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 c73fce5742a5738a73591a4789422a3ac9a9ac34d107f86ed41a13551bad41a7 7886b3cb93db3490c21067b3f08a44be6ead41881cd5cb2681c295485c892b48 1603771fc3bb957264f0e0f69af13f99d97b23a395e946185f6f8606e58076e3 1e1943eab5540b2524451fd9149e5bdb48d529525992e4427f98907806a8120c 6af7add7fe9690f182b65bb9686b7a53b05509c0da8b8a4bc1586130f494913d b8b867c1bdad08e7d4b11632415fa3ce077d2fbbc7d269b68165673c2c8ac4e0 516464b244c242de261bac3b8cbd3e0bbc298412c90d2700a0cf9253021faa00 a8bc982e97375733c6a6884402ec35e3c9d903a482fa1c0cec72a4d3d95e8461 2e96f0cb1410c8109bf609900229a88bc8162f92f8318a2e7cbf083b31cd0050 5625f27f367a7d7555860919ccfc373315df2bc1a1c3689aed6a359f22d5b62d 25cafb4681eab4cf7f0278530b5be09e38e3155ff5120fbadabb938d0b14882e 4a2bb8ba97dd0b3e5c3aa3b73fbbffd8d773e5fdd2227b6986d6e3c38cea3f16: (20.285511125s)
	W1216 03:32:15.842831   43267 kubeadm.go:649] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 c73fce5742a5738a73591a4789422a3ac9a9ac34d107f86ed41a13551bad41a7 7886b3cb93db3490c21067b3f08a44be6ead41881cd5cb2681c295485c892b48 1603771fc3bb957264f0e0f69af13f99d97b23a395e946185f6f8606e58076e3 1e1943eab5540b2524451fd9149e5bdb48d529525992e4427f98907806a8120c 6af7add7fe9690f182b65bb9686b7a53b05509c0da8b8a4bc1586130f494913d b8b867c1bdad08e7d4b11632415fa3ce077d2fbbc7d269b68165673c2c8ac4e0 516464b244c242de261bac3b8cbd3e0bbc298412c90d2700a0cf9253021faa00 a8bc982e97375733c6a6884402ec35e3c9d903a482fa1c0cec72a4d3d95e8461 2e96f0cb1410c8109bf609900229a88bc8162f92f8318a2e7cbf083b31cd0050 5625f27f367a7d7555860919ccfc373315df2bc1a1c3689aed6a359f22d5b62d 25cafb4681eab4cf7f0278530b5be09e38e3155ff5120fbadabb938d0b14882e 4a2bb8ba97dd0b3e5c3aa3b73fbbffd8d773e5fdd2227b6986d6e3c38cea3f16: Process exited with status 1
	stdout:
	c73fce5742a5738a73591a4789422a3ac9a9ac34d107f86ed41a13551bad41a7
	7886b3cb93db3490c21067b3f08a44be6ead41881cd5cb2681c295485c892b48
	1603771fc3bb957264f0e0f69af13f99d97b23a395e946185f6f8606e58076e3
	1e1943eab5540b2524451fd9149e5bdb48d529525992e4427f98907806a8120c
	6af7add7fe9690f182b65bb9686b7a53b05509c0da8b8a4bc1586130f494913d
	b8b867c1bdad08e7d4b11632415fa3ce077d2fbbc7d269b68165673c2c8ac4e0
	
	stderr:
	E1216 03:32:15.836790    3648 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"516464b244c242de261bac3b8cbd3e0bbc298412c90d2700a0cf9253021faa00\": container with ID starting with 516464b244c242de261bac3b8cbd3e0bbc298412c90d2700a0cf9253021faa00 not found: ID does not exist" containerID="516464b244c242de261bac3b8cbd3e0bbc298412c90d2700a0cf9253021faa00"
	time="2025-12-16T03:32:15Z" level=fatal msg="stopping the container \"516464b244c242de261bac3b8cbd3e0bbc298412c90d2700a0cf9253021faa00\": rpc error: code = NotFound desc = could not find container \"516464b244c242de261bac3b8cbd3e0bbc298412c90d2700a0cf9253021faa00\": container with ID starting with 516464b244c242de261bac3b8cbd3e0bbc298412c90d2700a0cf9253021faa00 not found: ID does not exist"
	I1216 03:32:15.842898   43267 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 03:32:15.873520   43267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:32:15.885146   43267 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 16 03:30 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5641 Dec 16 03:30 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1953 Dec 16 03:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5585 Dec 16 03:30 /etc/kubernetes/scheduler.conf
	
	I1216 03:32:15.885212   43267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:32:15.896375   43267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:32:15.906340   43267 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 03:32:15.906402   43267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:32:15.920974   43267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:32:15.932246   43267 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 03:32:15.932299   43267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:32:15.943220   43267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:32:15.954079   43267 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 03:32:15.954124   43267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:32:15.965404   43267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:32:15.976500   43267 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:32:16.029236   43267 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:32:17.547254   43267 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.517980774s)
	I1216 03:32:17.547370   43267 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:32:17.809942   43267 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:32:17.863496   43267 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:32:17.960424   43267 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:32:17.960513   43267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:32:18.460632   43267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:32:18.961506   43267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:32:18.998030   43267 api_server.go:72] duration metric: took 1.037618832s to wait for apiserver process to appear ...
	I1216 03:32:18.998055   43267 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:32:18.998077   43267 api_server.go:253] Checking apiserver healthz at https://192.168.83.23:8443/healthz ...
	I1216 03:32:18.742617   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:18.743573   43455 main.go:143] libmachine: domain kindnet-079027 has current primary IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:18.743603   43455 main.go:143] libmachine: found domain IP: 192.168.72.85
	I1216 03:32:18.743614   43455 main.go:143] libmachine: reserving static IP address...
	I1216 03:32:18.744127   43455 main.go:143] libmachine: unable to find host DHCP lease matching {name: "kindnet-079027", mac: "52:54:00:0f:e2:b0", ip: "192.168.72.85"} in network mk-kindnet-079027
	I1216 03:32:18.975520   43455 main.go:143] libmachine: reserved static IP address 192.168.72.85 for domain kindnet-079027
	I1216 03:32:18.975548   43455 main.go:143] libmachine: waiting for SSH...
	I1216 03:32:18.975557   43455 main.go:143] libmachine: Getting to WaitForSSH function...
	I1216 03:32:18.979205   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:18.979711   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:18.979748   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:18.980128   43455 main.go:143] libmachine: Using SSH client type: native
	I1216 03:32:18.980461   43455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.85 22 <nil> <nil>}
	I1216 03:32:18.980476   43455 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1216 03:32:19.107825   43455 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:32:19.108200   43455 main.go:143] libmachine: domain creation complete
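	(Editorial note: WaitForSSH above simply runs `exit 0` over SSH until a connection succeeds, at which point the domain is treated as created. A hedged sketch of that probe using golang.org/x/crypto/ssh follows; the address and key path are the ones from the log, while the "docker" user and host-key skipping are assumptions made only to keep the sketch self-contained.)

	// wait_for_ssh.go: minimal sketch of the "run `exit 0` over SSH until it works" probe
	// shown above. Host key checking is disabled purely for brevity.
	package main

	import (
		"log"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker", // assumed guest user
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
			Timeout:         5 * time.Second,
		}
		for {
			client, err := ssh.Dial("tcp", "192.168.72.85:22", cfg)
			if err == nil {
				sess, serr := client.NewSession()
				if serr == nil {
					rerr := sess.Run("exit 0") // same no-op command as the log
					sess.Close()
					if rerr == nil {
						client.Close()
						log.Println("SSH is up")
						return
					}
				}
				client.Close()
			}
			time.Sleep(time.Second)
		}
	}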
	I1216 03:32:19.110070   43455 machine.go:94] provisionDockerMachine start ...
	I1216 03:32:19.112758   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.113296   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:19.113327   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.113519   43455 main.go:143] libmachine: Using SSH client type: native
	I1216 03:32:19.113839   43455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.85 22 <nil> <nil>}
	I1216 03:32:19.113855   43455 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 03:32:19.230473   43455 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 03:32:19.230504   43455 buildroot.go:166] provisioning hostname "kindnet-079027"
	I1216 03:32:19.233886   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.234368   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:19.234391   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.234570   43455 main.go:143] libmachine: Using SSH client type: native
	I1216 03:32:19.234814   43455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.85 22 <nil> <nil>}
	I1216 03:32:19.234835   43455 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-079027 && echo "kindnet-079027" | sudo tee /etc/hostname
	I1216 03:32:19.367556   43455 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-079027
	
	I1216 03:32:19.370722   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.371412   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:19.371446   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.371642   43455 main.go:143] libmachine: Using SSH client type: native
	I1216 03:32:19.371940   43455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.85 22 <nil> <nil>}
	I1216 03:32:19.371967   43455 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-079027' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-079027/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-079027' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:32:19.496996   43455 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:32:19.497027   43455 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5036/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5036/.minikube}
	I1216 03:32:19.497077   43455 buildroot.go:174] setting up certificates
	I1216 03:32:19.497090   43455 provision.go:84] configureAuth start
	I1216 03:32:19.500614   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.501180   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:19.501219   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.504096   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.504566   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:19.504593   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.504751   43455 provision.go:143] copyHostCerts
	I1216 03:32:19.504828   43455 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5036/.minikube/ca.pem, removing ...
	I1216 03:32:19.504853   43455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5036/.minikube/ca.pem
	I1216 03:32:19.504940   43455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5036/.minikube/ca.pem (1078 bytes)
	I1216 03:32:19.505075   43455 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5036/.minikube/cert.pem, removing ...
	I1216 03:32:19.505082   43455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5036/.minikube/cert.pem
	I1216 03:32:19.505123   43455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5036/.minikube/cert.pem (1123 bytes)
	I1216 03:32:19.505193   43455 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5036/.minikube/key.pem, removing ...
	I1216 03:32:19.505199   43455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5036/.minikube/key.pem
	I1216 03:32:19.505230   43455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5036/.minikube/key.pem (1679 bytes)
	I1216 03:32:19.505335   43455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5036/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca-key.pem org=jenkins.kindnet-079027 san=[127.0.0.1 192.168.72.85 kindnet-079027 localhost minikube]
	I1216 03:32:19.604575   43455 provision.go:177] copyRemoteCerts
	I1216 03:32:19.604648   43455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:32:19.607914   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.608410   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:19.608448   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.608622   43455 sshutil.go:53] new ssh client: &{IP:192.168.72.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/id_rsa Username:docker}
	I1216 03:32:19.699107   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 03:32:19.727701   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1216 03:32:19.755673   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 03:32:19.787747   43455 provision.go:87] duration metric: took 290.636286ms to configureAuth
	I1216 03:32:19.787778   43455 buildroot.go:189] setting minikube options for container-runtime
	I1216 03:32:19.788022   43455 config.go:182] Loaded profile config "kindnet-079027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:32:19.791641   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.792132   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:19.792169   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.792361   43455 main.go:143] libmachine: Using SSH client type: native
	I1216 03:32:19.792650   43455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.85 22 <nil> <nil>}
	I1216 03:32:19.792677   43455 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 03:32:20.090463   43455 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 03:32:20.090490   43455 machine.go:97] duration metric: took 980.39954ms to provisionDockerMachine
	I1216 03:32:20.090500   43455 client.go:176] duration metric: took 18.683858332s to LocalClient.Create
	I1216 03:32:20.090518   43455 start.go:167] duration metric: took 18.683913531s to libmachine.API.Create "kindnet-079027"
	I1216 03:32:20.090526   43455 start.go:293] postStartSetup for "kindnet-079027" (driver="kvm2")
	I1216 03:32:20.090537   43455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:32:20.090605   43455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:32:20.094103   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.094620   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:20.094653   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.094826   43455 sshutil.go:53] new ssh client: &{IP:192.168.72.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/id_rsa Username:docker}
	I1216 03:32:20.187787   43455 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:32:20.193116   43455 info.go:137] Remote host: Buildroot 2025.02
	I1216 03:32:20.193155   43455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5036/.minikube/addons for local assets ...
	I1216 03:32:20.193240   43455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5036/.minikube/files for local assets ...
	I1216 03:32:20.193317   43455 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-5036/.minikube/files/etc/ssl/certs/89742.pem -> 89742.pem in /etc/ssl/certs
	I1216 03:32:20.193414   43455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:32:20.205740   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/files/etc/ssl/certs/89742.pem --> /etc/ssl/certs/89742.pem (1708 bytes)
	I1216 03:32:20.242317   43455 start.go:296] duration metric: took 151.777203ms for postStartSetup
	I1216 03:32:20.245949   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.246397   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:20.246427   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.246658   43455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/config.json ...
	I1216 03:32:20.246917   43455 start.go:128] duration metric: took 18.84190913s to createHost
	I1216 03:32:20.249571   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.250014   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:20.250044   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.250272   43455 main.go:143] libmachine: Using SSH client type: native
	I1216 03:32:20.250506   43455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.85 22 <nil> <nil>}
	I1216 03:32:20.250519   43455 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1216 03:32:20.363378   43455 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765855940.319446779
	
	I1216 03:32:20.363406   43455 fix.go:216] guest clock: 1765855940.319446779
	I1216 03:32:20.363417   43455 fix.go:229] Guest: 2025-12-16 03:32:20.319446779 +0000 UTC Remote: 2025-12-16 03:32:20.246959246 +0000 UTC m=+18.971854705 (delta=72.487533ms)
	I1216 03:32:20.363438   43455 fix.go:200] guest clock delta is within tolerance: 72.487533ms
	I1216 03:32:20.363445   43455 start.go:83] releasing machines lock for "kindnet-079027", held for 18.958567245s
	I1216 03:32:20.367215   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.367673   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:20.367721   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.368284   43455 ssh_runner.go:195] Run: cat /version.json
	I1216 03:32:20.368427   43455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:32:20.371233   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.371451   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.371681   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:20.371711   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.371872   43455 sshutil.go:53] new ssh client: &{IP:192.168.72.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/id_rsa Username:docker}
	I1216 03:32:20.371890   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:20.371914   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.372140   43455 sshutil.go:53] new ssh client: &{IP:192.168.72.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/id_rsa Username:docker}
	I1216 03:32:20.458434   43455 ssh_runner.go:195] Run: systemctl --version
	I1216 03:32:20.496263   43455 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 03:32:20.659125   43455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:32:20.667988   43455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:32:20.668091   43455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 03:32:20.693406   43455 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 03:32:20.693430   43455 start.go:496] detecting cgroup driver to use...
	I1216 03:32:20.693490   43455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 03:32:20.718895   43455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:32:20.740447   43455 docker.go:218] disabling cri-docker service (if available) ...
	I1216 03:32:20.740523   43455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 03:32:20.759340   43455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 03:32:20.775521   43455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 03:32:20.923067   43455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 03:32:21.139033   43455 docker.go:234] disabling docker service ...
	I1216 03:32:21.139090   43455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 03:32:21.159159   43455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 03:32:21.174409   43455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	W1216 03:32:17.055695   43066 pod_ready.go:104] pod "coredns-66bc5c9577-gqnr7" is not "Ready", error: <nil>
	W1216 03:32:19.058037   43066 pod_ready.go:104] pod "coredns-66bc5c9577-gqnr7" is not "Ready", error: <nil>
	I1216 03:32:20.053865   43066 pod_ready.go:99] pod "coredns-66bc5c9577-gqnr7" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-gqnr7" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-gqnr7" not found
	I1216 03:32:20.053898   43066 pod_ready.go:86] duration metric: took 10.003512376s for pod "coredns-66bc5c9577-gqnr7" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:20.053912   43066 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tf8wg" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:21.339533   43455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 03:32:21.497663   43455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 03:32:21.517067   43455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:32:21.540339   43455 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 03:32:21.540404   43455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:32:21.553626   43455 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 03:32:21.553686   43455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:32:21.566876   43455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:32:21.584663   43455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:32:21.598076   43455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:32:21.612749   43455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:32:21.624974   43455 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:32:21.647637   43455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:32:21.665040   43455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:32:21.677511   43455 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 03:32:21.677573   43455 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 03:32:21.698377   43455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:32:21.709296   43455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:32:21.892311   43455 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 03:32:22.005148   43455 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 03:32:22.005212   43455 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 03:32:22.010549   43455 start.go:564] Will wait 60s for crictl version
	I1216 03:32:22.010647   43455 ssh_runner.go:195] Run: which crictl
	I1216 03:32:22.014801   43455 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 03:32:22.062052   43455 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 03:32:22.062126   43455 ssh_runner.go:195] Run: crio --version
	I1216 03:32:22.099763   43455 ssh_runner.go:195] Run: crio --version
	I1216 03:32:22.135429   43455 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
	I1216 03:32:21.356991   43267 api_server.go:279] https://192.168.83.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 03:32:21.357020   43267 api_server.go:103] status: https://192.168.83.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 03:32:21.357037   43267 api_server.go:253] Checking apiserver healthz at https://192.168.83.23:8443/healthz ...
	I1216 03:32:21.429878   43267 api_server.go:279] https://192.168.83.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 03:32:21.429909   43267 api_server.go:103] status: https://192.168.83.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 03:32:21.499130   43267 api_server.go:253] Checking apiserver healthz at https://192.168.83.23:8443/healthz ...
	I1216 03:32:21.510682   43267 api_server.go:279] https://192.168.83.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:32:21.510723   43267 api_server.go:103] status: https://192.168.83.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 03:32:21.998207   43267 api_server.go:253] Checking apiserver healthz at https://192.168.83.23:8443/healthz ...
	I1216 03:32:22.006379   43267 api_server.go:279] https://192.168.83.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:32:22.006413   43267 api_server.go:103] status: https://192.168.83.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 03:32:22.499072   43267 api_server.go:253] Checking apiserver healthz at https://192.168.83.23:8443/healthz ...
	I1216 03:32:22.524538   43267 api_server.go:279] https://192.168.83.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:32:22.524596   43267 api_server.go:103] status: https://192.168.83.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 03:32:22.999070   43267 api_server.go:253] Checking apiserver healthz at https://192.168.83.23:8443/healthz ...
	I1216 03:32:23.008079   43267 api_server.go:279] https://192.168.83.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:32:23.008108   43267 api_server.go:103] status: https://192.168.83.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 03:32:23.498369   43267 api_server.go:253] Checking apiserver healthz at https://192.168.83.23:8443/healthz ...
	I1216 03:32:23.503271   43267 api_server.go:279] https://192.168.83.23:8443/healthz returned 200:
	ok
	I1216 03:32:23.516104   43267 api_server.go:141] control plane version: v1.34.2
	I1216 03:32:23.516126   43267 api_server.go:131] duration metric: took 4.518063625s to wait for apiserver health ...
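The polling recorded above (api_server.go:253/279/103 in the 43267 run) repeatedly GETs /healthz on the apiserver, treating 403 responses (RBAC not yet bootstrapped) and 500 responses (poststarthooks still failing) as "not ready" until a 200/ok arrives or the wait times out. Below is a minimal, hypothetical Go sketch of that loop, assuming a fixed 500 ms retry interval and skipping TLS verification for brevity; minikube's real client authenticates against the cluster CA.

// healthz_poll_sketch.go: hypothetical sketch of waiting for apiserver /healthz.
package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	// Skipping certificate verification is an assumption for this sketch only.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is "ok" once the control plane is healthy
			}
			// 403 and 500 both mean "keep waiting", as in the log above.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("apiserver healthz did not become ready in time")
}

func main() {
	if err := waitForHealthz("https://192.168.83.23:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}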
	I1216 03:32:23.516177   43267 cni.go:84] Creating CNI manager for ""
	I1216 03:32:23.516185   43267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 03:32:23.518245   43267 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 03:32:23.523078   43267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 03:32:23.548919   43267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 03:32:23.587680   43267 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:32:23.597672   43267 system_pods.go:59] 6 kube-system pods found
	I1216 03:32:23.597725   43267 system_pods.go:61] "coredns-66bc5c9577-rcwxg" [b4c343db-7dab-4de5-89f2-ce2687b6631f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:23.597738   43267 system_pods.go:61] "etcd-pause-127368" [e387d448-2b77-40c5-a65b-335fb7902fd5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:32:23.597760   43267 system_pods.go:61] "kube-apiserver-pause-127368" [50ced11f-9adb-413d-a3f9-02a8e4e1e331] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:32:23.597775   43267 system_pods.go:61] "kube-controller-manager-pause-127368" [879a4960-022e-4682-87bd-30d9240d52ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 03:32:23.597785   43267 system_pods.go:61] "kube-proxy-6tst4" [c5bc773a-8ef2-4f79-bdd3-ead643257601] Running
	I1216 03:32:23.597797   43267 system_pods.go:61] "kube-scheduler-pause-127368" [a9aa83b7-fd6e-4ff0-b3d3-d8d9b8111355] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:32:23.597807   43267 system_pods.go:74] duration metric: took 10.106234ms to wait for pod list to return data ...
	I1216 03:32:23.597826   43267 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:32:23.606449   43267 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 03:32:23.606483   43267 node_conditions.go:123] node cpu capacity is 2
	I1216 03:32:23.606499   43267 node_conditions.go:105] duration metric: took 8.666246ms to run NodePressure ...
	I1216 03:32:23.606559   43267 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:32:23.887586   43267 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1216 03:32:23.892341   43267 kubeadm.go:744] kubelet initialised
	I1216 03:32:23.892367   43267 kubeadm.go:745] duration metric: took 4.755865ms waiting for restarted kubelet to initialise ...
	I1216 03:32:23.892386   43267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:32:23.916831   43267 ops.go:34] apiserver oom_adj: -16
	I1216 03:32:23.916857   43267 kubeadm.go:602] duration metric: took 28.448824281s to restartPrimaryControlPlane
	I1216 03:32:23.916877   43267 kubeadm.go:403] duration metric: took 28.607885622s to StartCluster
	I1216 03:32:23.916897   43267 settings.go:142] acquiring lock: {Name:mk546ecdfe1860ae68a814905b53e6453298b4fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:23.917007   43267 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5036/kubeconfig
	I1216 03:32:23.918534   43267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/kubeconfig: {Name:mk6832d71ef0ad581fa898dceefc2fcc2fd665b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:23.918796   43267 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.83.23 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:32:23.918948   43267 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:32:23.919093   43267 config.go:182] Loaded profile config "pause-127368": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:32:23.920505   43267 out.go:179] * Enabled addons: 
	I1216 03:32:23.920507   43267 out.go:179] * Verifying Kubernetes components...
	I1216 03:32:23.921643   43267 addons.go:530] duration metric: took 2.721553ms for enable addons: enabled=[]
	I1216 03:32:23.921674   43267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:32:24.153042   43267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:32:24.178150   43267 node_ready.go:35] waiting up to 6m0s for node "pause-127368" to be "Ready" ...
	I1216 03:32:24.182356   43267 node_ready.go:49] node "pause-127368" is "Ready"
	I1216 03:32:24.182381   43267 node_ready.go:38] duration metric: took 4.196221ms for node "pause-127368" to be "Ready" ...
	I1216 03:32:24.182396   43267 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:32:24.182451   43267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:32:24.205367   43267 api_server.go:72] duration metric: took 286.528322ms to wait for apiserver process to appear ...
	I1216 03:32:24.205402   43267 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:32:24.205427   43267 api_server.go:253] Checking apiserver healthz at https://192.168.83.23:8443/healthz ...
	I1216 03:32:24.212509   43267 api_server.go:279] https://192.168.83.23:8443/healthz returned 200:
	ok
	I1216 03:32:24.213670   43267 api_server.go:141] control plane version: v1.34.2
	I1216 03:32:24.213700   43267 api_server.go:131] duration metric: took 8.289431ms to wait for apiserver health ...
	I1216 03:32:24.213713   43267 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:32:24.218182   43267 system_pods.go:59] 6 kube-system pods found
	I1216 03:32:24.218212   43267 system_pods.go:61] "coredns-66bc5c9577-rcwxg" [b4c343db-7dab-4de5-89f2-ce2687b6631f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:24.218224   43267 system_pods.go:61] "etcd-pause-127368" [e387d448-2b77-40c5-a65b-335fb7902fd5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:32:24.218233   43267 system_pods.go:61] "kube-apiserver-pause-127368" [50ced11f-9adb-413d-a3f9-02a8e4e1e331] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:32:24.218242   43267 system_pods.go:61] "kube-controller-manager-pause-127368" [879a4960-022e-4682-87bd-30d9240d52ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 03:32:24.218250   43267 system_pods.go:61] "kube-proxy-6tst4" [c5bc773a-8ef2-4f79-bdd3-ead643257601] Running
	I1216 03:32:24.218257   43267 system_pods.go:61] "kube-scheduler-pause-127368" [a9aa83b7-fd6e-4ff0-b3d3-d8d9b8111355] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:32:24.218265   43267 system_pods.go:74] duration metric: took 4.54413ms to wait for pod list to return data ...
	I1216 03:32:24.218279   43267 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:32:24.220906   43267 default_sa.go:45] found service account: "default"
	I1216 03:32:24.220947   43267 default_sa.go:55] duration metric: took 2.660134ms for default service account to be created ...
	I1216 03:32:24.220961   43267 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 03:32:24.223981   43267 system_pods.go:86] 6 kube-system pods found
	I1216 03:32:24.224005   43267 system_pods.go:89] "coredns-66bc5c9577-rcwxg" [b4c343db-7dab-4de5-89f2-ce2687b6631f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:24.224020   43267 system_pods.go:89] "etcd-pause-127368" [e387d448-2b77-40c5-a65b-335fb7902fd5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:32:24.224030   43267 system_pods.go:89] "kube-apiserver-pause-127368" [50ced11f-9adb-413d-a3f9-02a8e4e1e331] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:32:24.224040   43267 system_pods.go:89] "kube-controller-manager-pause-127368" [879a4960-022e-4682-87bd-30d9240d52ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 03:32:24.224046   43267 system_pods.go:89] "kube-proxy-6tst4" [c5bc773a-8ef2-4f79-bdd3-ead643257601] Running
	I1216 03:32:24.224059   43267 system_pods.go:89] "kube-scheduler-pause-127368" [a9aa83b7-fd6e-4ff0-b3d3-d8d9b8111355] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:32:24.224069   43267 system_pods.go:126] duration metric: took 3.097079ms to wait for k8s-apps to be running ...
	I1216 03:32:24.224079   43267 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 03:32:24.224129   43267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:32:24.245028   43267 system_svc.go:56] duration metric: took 20.938745ms WaitForService to wait for kubelet
	I1216 03:32:24.245058   43267 kubeadm.go:587] duration metric: took 326.227152ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:32:24.245079   43267 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:32:24.248307   43267 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 03:32:24.248334   43267 node_conditions.go:123] node cpu capacity is 2
	I1216 03:32:24.248350   43267 node_conditions.go:105] duration metric: took 3.264349ms to run NodePressure ...
	I1216 03:32:24.248366   43267 start.go:242] waiting for startup goroutines ...
	I1216 03:32:24.248377   43267 start.go:247] waiting for cluster config update ...
	I1216 03:32:24.248388   43267 start.go:256] writing updated cluster config ...
	I1216 03:32:24.248803   43267 ssh_runner.go:195] Run: rm -f paused
	I1216 03:32:24.255589   43267 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:32:24.256694   43267 kapi.go:59] client config for pause-127368: &rest.Config{Host:"https://192.168.83.23:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-5036/.minikube/profiles/pause-127368/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-5036/.minikube/profiles/pause-127368/client.key", CAFile:"/home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 03:32:24.261712   43267 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rcwxg" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:22.139456   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:22.139894   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:22.139949   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:22.140137   43455 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1216 03:32:22.144464   43455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:32:22.160624   43455 kubeadm.go:884] updating cluster {Name:kindnet-079027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-079027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.85 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:32:22.160723   43455 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:32:22.160774   43455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:32:22.194514   43455 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1216 03:32:22.194573   43455 ssh_runner.go:195] Run: which lz4
	I1216 03:32:22.198588   43455 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 03:32:22.203357   43455 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 03:32:22.203394   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1216 03:32:23.506673   43455 crio.go:462] duration metric: took 1.308112075s to copy over tarball
	I1216 03:32:23.506737   43455 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 03:32:25.096947   43455 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.590165303s)
	I1216 03:32:25.096980   43455 crio.go:469] duration metric: took 1.590286184s to extract the tarball
	I1216 03:32:25.096987   43455 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1216 03:32:25.133489   43455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:32:25.171322   43455 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:32:25.171354   43455 cache_images.go:86] Images are preloaded, skipping loading
	I1216 03:32:25.171362   43455 kubeadm.go:935] updating node { 192.168.72.85 8443 v1.34.2 crio true true} ...
	I1216 03:32:25.171457   43455 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-079027 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.85
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-079027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1216 03:32:25.171529   43455 ssh_runner.go:195] Run: crio config
	I1216 03:32:25.217771   43455 cni.go:84] Creating CNI manager for "kindnet"
	I1216 03:32:25.217799   43455 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 03:32:25.217826   43455 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.85 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-079027 NodeName:kindnet-079027 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.85"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.85 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:32:25.218010   43455 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.85
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-079027"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.85"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.85"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
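	The block above is the multi-document kubeadm config that minikube writes to /var/tmp/minikube/kubeadm.yaml.new: plain YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". The short Go sketch below is not part of the captured logs and is not minikube code; it only illustrates, as an assumption about how one might inspect such a dump offline, splitting the documents and reading a few fields. The kubeadmYAML constant is a trimmed stand-in for the full config printed above.

	package main

	import (
		"fmt"
		"strings"

		"sigs.k8s.io/yaml"
	)

	// kubeadmYAML is a shortened stand-in for the multi-document config dumped
	// in the log above; only enough fields are kept to make the sketch self-contained.
	const kubeadmYAML = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	clusterName: mk
	kubernetesVersion: v1.34.2
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: cgroupfs
	`

	func main() {
		// kubeadm configs are ordinary multi-document YAML, so "---" separates documents.
		for _, doc := range strings.Split(kubeadmYAML, "\n---\n") {
			doc = strings.TrimSpace(doc)
			if doc == "" {
				continue
			}
			var m map[string]interface{}
			if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
				fmt.Println("unmarshal error:", err)
				continue
			}
			// Print the document kind plus a couple of fields of interest (nil if absent).
			fmt.Printf("kind=%v clusterName=%v cgroupDriver=%v\n",
				m["kind"], m["clusterName"], m["cgroupDriver"])
		}
	}

	(Requires sigs.k8s.io/yaml in the module; each document prints on one line with whichever of the fields happen to be present.)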
	
	I1216 03:32:25.218091   43455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 03:32:25.230209   43455 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 03:32:25.230285   43455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:32:25.241975   43455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1216 03:32:25.261775   43455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 03:32:25.282649   43455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1216 03:32:25.304175   43455 ssh_runner.go:195] Run: grep 192.168.72.85	control-plane.minikube.internal$ /etc/hosts
	I1216 03:32:25.308033   43455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.85	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:32:25.321957   43455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:32:25.468287   43455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:32:25.501615   43455 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027 for IP: 192.168.72.85
	I1216 03:32:25.501642   43455 certs.go:195] generating shared ca certs ...
	I1216 03:32:25.501662   43455 certs.go:227] acquiring lock for ca certs: {Name:mk77e952ddad6d1f2b7d1d07b6d50cdef35b56ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:25.501874   43455 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5036/.minikube/ca.key
	I1216 03:32:25.501957   43455 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.key
	I1216 03:32:25.501976   43455 certs.go:257] generating profile certs ...
	I1216 03:32:25.502052   43455 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.key
	I1216 03:32:25.502081   43455 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt with IP's: []
	I1216 03:32:25.698062   43455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt ...
	I1216 03:32:25.698089   43455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt: {Name:mka2d54a423cae6c2ff9c307c3d6506f036e4266 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:25.698278   43455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.key ...
	I1216 03:32:25.698292   43455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.key: {Name:mkace4fcdebb26f91a01a6b40dc1b1edc405d7e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:25.698410   43455 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.key.c1f626c8
	I1216 03:32:25.698427   43455 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.crt.c1f626c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.85]
	I1216 03:32:25.744638   43455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.crt.c1f626c8 ...
	I1216 03:32:25.744660   43455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.crt.c1f626c8: {Name:mka6295fd5b0ff9bc346f24a0f09e16fb82be421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:25.744820   43455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.key.c1f626c8 ...
	I1216 03:32:25.744836   43455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.key.c1f626c8: {Name:mk6a434ca45d7e7bc6a8b0625ecf2d911b7304c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:25.744946   43455 certs.go:382] copying /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.crt.c1f626c8 -> /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.crt
	I1216 03:32:25.745022   43455 certs.go:386] copying /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.key.c1f626c8 -> /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.key
	I1216 03:32:25.745080   43455 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/proxy-client.key
	I1216 03:32:25.745109   43455 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/proxy-client.crt with IP's: []
	I1216 03:32:25.817328   43455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/proxy-client.crt ...
	I1216 03:32:25.817348   43455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/proxy-client.crt: {Name:mkc62020bad3565b7bd4310e95b12e3102eb51f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:25.817513   43455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/proxy-client.key ...
	I1216 03:32:25.817527   43455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/proxy-client.key: {Name:mk8c09ff3034191c5e136db519004cb87d0fc0e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:25.817731   43455 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/8974.pem (1338 bytes)
	W1216 03:32:25.817770   43455 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5036/.minikube/certs/8974_empty.pem, impossibly tiny 0 bytes
	I1216 03:32:25.817780   43455 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:32:25.817802   43455 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem (1078 bytes)
	I1216 03:32:25.817832   43455 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:32:25.817858   43455 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/key.pem (1679 bytes)
	I1216 03:32:25.817896   43455 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/files/etc/ssl/certs/89742.pem (1708 bytes)
	I1216 03:32:25.818405   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:32:25.849955   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 03:32:25.880348   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:32:25.909433   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:32:25.937850   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 03:32:25.964544   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 03:32:25.991908   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:32:26.020772   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 03:32:26.049169   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/files/etc/ssl/certs/89742.pem --> /usr/share/ca-certificates/89742.pem (1708 bytes)
	I1216 03:32:26.079312   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:32:26.107261   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/certs/8974.pem --> /usr/share/ca-certificates/8974.pem (1338 bytes)
	I1216 03:32:26.134964   43455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:32:26.153192   43455 ssh_runner.go:195] Run: openssl version
	I1216 03:32:26.158940   43455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:32:26.171367   43455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:32:26.183729   43455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:32:26.188837   43455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:32:26.188888   43455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:32:26.195846   43455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:32:26.207621   43455 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 03:32:26.219316   43455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8974.pem
	I1216 03:32:26.230053   43455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8974.pem /etc/ssl/certs/8974.pem
	I1216 03:32:26.242603   43455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8974.pem
	I1216 03:32:26.247642   43455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:36 /usr/share/ca-certificates/8974.pem
	I1216 03:32:26.247687   43455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8974.pem
	I1216 03:32:26.254285   43455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:32:26.265223   43455 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8974.pem /etc/ssl/certs/51391683.0
	I1216 03:32:26.276018   43455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89742.pem
	I1216 03:32:26.287810   43455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89742.pem /etc/ssl/certs/89742.pem
	I1216 03:32:26.300233   43455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89742.pem
	I1216 03:32:26.305053   43455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:36 /usr/share/ca-certificates/89742.pem
	I1216 03:32:26.305107   43455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89742.pem
	I1216 03:32:26.312056   43455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:32:26.322448   43455 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/89742.pem /etc/ssl/certs/3ec20f2e.0
	I1216 03:32:26.333274   43455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	W1216 03:32:22.060639   43066 pod_ready.go:104] pod "coredns-66bc5c9577-tf8wg" is not "Ready", error: <nil>
	W1216 03:32:24.061483   43066 pod_ready.go:104] pod "coredns-66bc5c9577-tf8wg" is not "Ready", error: <nil>
	W1216 03:32:26.561121   43066 pod_ready.go:104] pod "coredns-66bc5c9577-tf8wg" is not "Ready", error: <nil>
	I1216 03:32:26.337938   43455 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 03:32:26.337985   43455 kubeadm.go:401] StartCluster: {Name:kindnet-079027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2
ClusterName:kindnet-079027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.85 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:32:26.338047   43455 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:32:26.338101   43455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:32:26.371606   43455 cri.go:89] found id: ""
	I1216 03:32:26.371681   43455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:32:26.383120   43455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:32:26.395964   43455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:32:26.407347   43455 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:32:26.407371   43455 kubeadm.go:158] found existing configuration files:
	
	I1216 03:32:26.407411   43455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:32:26.417911   43455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:32:26.417971   43455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:32:26.428610   43455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:32:26.438412   43455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:32:26.438469   43455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:32:26.448950   43455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:32:26.458899   43455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:32:26.458945   43455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:32:26.469333   43455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:32:26.479210   43455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:32:26.479261   43455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:32:26.489956   43455 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 03:32:26.537167   43455 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 03:32:26.537216   43455 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:32:26.630470   43455 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:32:26.630647   43455 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:32:26.630766   43455 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:32:26.642423   43455 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1216 03:32:26.269297   43267 pod_ready.go:104] pod "coredns-66bc5c9577-rcwxg" is not "Ready", error: <nil>
	W1216 03:32:28.767318   43267 pod_ready.go:104] pod "coredns-66bc5c9577-rcwxg" is not "Ready", error: <nil>
	I1216 03:32:26.643958   43455 out.go:252]   - Generating certificates and keys ...
	I1216 03:32:26.644045   43455 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:32:26.644129   43455 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:32:27.110875   43455 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 03:32:27.797875   43455 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 03:32:28.010426   43455 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 03:32:28.386109   43455 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 03:32:28.831264   43455 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 03:32:28.831582   43455 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-079027 localhost] and IPs [192.168.72.85 127.0.0.1 ::1]
	I1216 03:32:28.972790   43455 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 03:32:28.973078   43455 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-079027 localhost] and IPs [192.168.72.85 127.0.0.1 ::1]
	I1216 03:32:29.200041   43455 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 03:32:29.620065   43455 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 03:32:29.874239   43455 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 03:32:29.875035   43455 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:32:30.031559   43455 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:32:30.155587   43455 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:32:30.250334   43455 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:32:30.520222   43455 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:32:30.836257   43455 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:32:30.836392   43455 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:32:30.838520   43455 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:32:30.840167   43455 out.go:252]   - Booting up control plane ...
	I1216 03:32:30.840284   43455 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:32:30.840391   43455 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:32:30.840483   43455 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:32:30.856463   43455 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:32:30.856668   43455 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:32:30.863390   43455 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:32:30.863636   43455 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:32:30.863703   43455 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:32:31.046118   43455 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:32:31.046281   43455 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1216 03:32:29.061836   43066 pod_ready.go:104] pod "coredns-66bc5c9577-tf8wg" is not "Ready", error: <nil>
	W1216 03:32:31.559306   43066 pod_ready.go:104] pod "coredns-66bc5c9577-tf8wg" is not "Ready", error: <nil>
	W1216 03:32:30.768164   43267 pod_ready.go:104] pod "coredns-66bc5c9577-rcwxg" is not "Ready", error: <nil>
	I1216 03:32:32.267695   43267 pod_ready.go:94] pod "coredns-66bc5c9577-rcwxg" is "Ready"
	I1216 03:32:32.267731   43267 pod_ready.go:86] duration metric: took 8.005991377s for pod "coredns-66bc5c9577-rcwxg" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:32.270893   43267 pod_ready.go:83] waiting for pod "etcd-pause-127368" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:32.275755   43267 pod_ready.go:94] pod "etcd-pause-127368" is "Ready"
	I1216 03:32:32.275779   43267 pod_ready.go:86] duration metric: took 4.859537ms for pod "etcd-pause-127368" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:32.278307   43267 pod_ready.go:83] waiting for pod "kube-apiserver-pause-127368" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:33.286540   43267 pod_ready.go:94] pod "kube-apiserver-pause-127368" is "Ready"
	I1216 03:32:33.286577   43267 pod_ready.go:86] duration metric: took 1.008247465s for pod "kube-apiserver-pause-127368" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:33.289695   43267 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-127368" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:33.294699   43267 pod_ready.go:94] pod "kube-controller-manager-pause-127368" is "Ready"
	I1216 03:32:33.294718   43267 pod_ready.go:86] duration metric: took 4.994853ms for pod "kube-controller-manager-pause-127368" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:33.464945   43267 pod_ready.go:83] waiting for pod "kube-proxy-6tst4" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:33.865870   43267 pod_ready.go:94] pod "kube-proxy-6tst4" is "Ready"
	I1216 03:32:33.865902   43267 pod_ready.go:86] duration metric: took 400.925495ms for pod "kube-proxy-6tst4" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:34.065414   43267 pod_ready.go:83] waiting for pod "kube-scheduler-pause-127368" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:35.665155   43267 pod_ready.go:94] pod "kube-scheduler-pause-127368" is "Ready"
	I1216 03:32:35.665184   43267 pod_ready.go:86] duration metric: took 1.599744584s for pod "kube-scheduler-pause-127368" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:35.665198   43267 pod_ready.go:40] duration metric: took 11.409563586s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:32:35.709029   43267 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 03:32:35.710544   43267 out.go:179] * Done! kubectl is now configured to use "pause-127368" cluster and "default" namespace by default
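	The pod_ready.go lines interleaved above (processes 43066 and 43267) show each cluster start polling kube-system pods, e.g. coredns-66bc5c9577-rcwxg, until their Ready condition turns True before declaring "Done!". The Go sketch below is a minimal client-go illustration of that kind of readiness poll, not minikube's actual pod_ready implementation; the kubeconfig path and pod name are placeholders.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path and pod name; adjust for a real cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 2s and give up after 4m.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-rcwxg", metav1.GetOptions{})
				if err != nil {
					// Pod not visible yet or a transient API error: keep polling.
					return false, nil
				}
				return isPodReady(pod), nil
			})
		fmt.Println("wait result:", err)
	}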
	
	
	==> CRI-O <==
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.362360898Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765855956362335971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8ac7015f-23b5-4c5e-b4cf-71892f7e8167 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.363434919Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e40f03d9-4ee5-442a-859a-fe2ad082620e name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.363554575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e40f03d9-4ee5-442a-859a-fe2ad082620e name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.363823651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30cbfcf72c599e2f73f84aa02a68c130b8e7b215afd97bea124adf804a1a61cd,PodSandboxId:fa9452a9930b7fa54c0769f7dcd06a96965a9d8992337228ed778d536be4aa2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765855942253086054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tst4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5bc773a-8ef2-4f79-bdd3-ead643257601,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1659127a04c0e595791c31ebcd6c3cedcd9c75aa05195038dc70a120b78024d6,PodSandboxId:9c94e1cb0dd180d945fec246cfca9cddeada9362ad391b96e90bf3637b16b89d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765855942263923773,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rcwxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4c343db-7dab-4de5-89f2-ce2687b6631f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb21282af131f6b326f61f602cb404d88d036dbaf91f58041b9d0cc59a457f7,PodSandboxId:dae9ca1923edaf49e343cba4e7db0bb11a35c290c3b22a27e42b7e493908a7be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765855938427364711,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582fb99923c3b4e606630b30dbd77848,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac2603a4581191c10d4b3e10f52b0b12af70b9f64c42741b7205147715c7078,PodSandboxId:18ab7bec90a901e061c388215a8dda583ecb32acea89125b131de26364383a3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:C
ONTAINER_RUNNING,CreatedAt:1765855938448790249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d448dfe87545fc587b352dd2eaa7a763,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c4b91c38409e3b515017c952463970ec783353ede06e6b8d23b2ad4d57fd9b,PodSandboxId:f2de72045bd453f040fc6752a3acecee68ede348807bf0988cb9d01cae15f88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765855938416880990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f172ccbfdd61b8b761a84f05e6d663b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eac74d80c5016151ab3e0030f99826d0bb335fdbaf666b409fc395d37c2c73f,PodSandboxId:7e479d126c3d4b0c4c9a19f1c82ac9acbe2fee3e6b99d562f65f8f7e28db08c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a
5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765855938402228366,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d64ac4f35b628e0d630c2501f74195a7,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73fce5742a5738a73591a4789422a3ac9a9ac34d107f86ed41a13551bad41a7,PodSandboxId:9c94e1cb0dd180d945fec246cfca9cddeada9362ad391b
96e90bf3637b16b89d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765855914547417482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rcwxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4c343db-7dab-4de5-89f2-ce2687b6631f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7886b3cb93db3490c21067b3f08a44be6ead41881cd5cb2681c295485c892b48,PodSandboxId:fa9452a9930b7fa54c0769f7dcd06a96965a9d8992337228ed778d536be4aa2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765855913538605352,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tst4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5bc773a-8ef2-4f79-bdd3-ead643257601,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8b867c1bdad08e7d4b11632415fa3ce077d2fbbc7d269b68165673c2c8ac4e0,PodSandboxId:dae9ca1923edaf49e343cba4e7db0bb11a35c290c3b22a27e42b7e493908a7be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765855913395194273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582fb99923c3b4e606630b30dbd77848,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1603771fc3bb957264f0e0f69af13f99d97b23a395e946185f6f8606e58076e3,PodSandboxId:7e479d126c3d4b0c4c9a19f1c82ac9acbe2fee3e6b99d562f65f8f7e28db08c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765855913432386960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d64ac4f35b628e0d630c2501f74195a7,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e1943eab5540b2524451fd9149e5bdb48d529525992e4427f98907806a8120c,PodSandboxId:18ab7bec90a901e061c388215a8dda583ecb32acea89125b131de26364383a3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765855913425395652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d448dfe87545fc587b352dd2eaa7a763,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af7add7fe9690f182b65bb9686b7a53b05509c0da8b8a4bc1586130f494913d,PodSandboxId:f2de72045bd453f040fc6752a3acecee68ede348807bf0988cb9d01cae15f88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765855913412415764,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f172ccbfdd61b8b761a84f05e6d663b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e40f03d9-4ee5-442a-859a-fe2ad082620e name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.406865214Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ab616b3-852c-4d64-9d8d-0ed9cf739334 name=/runtime.v1.RuntimeService/Version
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.406963467Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ab616b3-852c-4d64-9d8d-0ed9cf739334 name=/runtime.v1.RuntimeService/Version
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.408453056Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95f1b784-fb90-421a-82cb-42ed7e845c53 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.408972554Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765855956408951387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95f1b784-fb90-421a-82cb-42ed7e845c53 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.409716064Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4047348-3d8c-4501-842f-83a851fb4b18 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.409785503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4047348-3d8c-4501-842f-83a851fb4b18 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.410021131Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30cbfcf72c599e2f73f84aa02a68c130b8e7b215afd97bea124adf804a1a61cd,PodSandboxId:fa9452a9930b7fa54c0769f7dcd06a96965a9d8992337228ed778d536be4aa2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765855942253086054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tst4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5bc773a-8ef2-4f79-bdd3-ead643257601,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1659127a04c0e595791c31ebcd6c3cedcd9c75aa05195038dc70a120b78024d6,PodSandboxId:9c94e1cb0dd180d945fec246cfca9cddeada9362ad391b96e90bf3637b16b89d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765855942263923773,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rcwxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4c343db-7dab-4de5-89f2-ce2687b6631f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb21282af131f6b326f61f602cb404d88d036dbaf91f58041b9d0cc59a457f7,PodSandboxId:dae9ca1923edaf49e343cba4e7db0bb11a35c290c3b22a27e42b7e493908a7be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765855938427364711,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582fb99923c3b4e606630b30dbd77848,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac2603a4581191c10d4b3e10f52b0b12af70b9f64c42741b7205147715c7078,PodSandboxId:18ab7bec90a901e061c388215a8dda583ecb32acea89125b131de26364383a3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:C
ONTAINER_RUNNING,CreatedAt:1765855938448790249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d448dfe87545fc587b352dd2eaa7a763,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c4b91c38409e3b515017c952463970ec783353ede06e6b8d23b2ad4d57fd9b,PodSandboxId:f2de72045bd453f040fc6752a3acecee68ede348807bf0988cb9d01cae15f88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765855938416880990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f172ccbfdd61b8b761a84f05e6d663b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eac74d80c5016151ab3e0030f99826d0bb335fdbaf666b409fc395d37c2c73f,PodSandboxId:7e479d126c3d4b0c4c9a19f1c82ac9acbe2fee3e6b99d562f65f8f7e28db08c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a
5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765855938402228366,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d64ac4f35b628e0d630c2501f74195a7,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73fce5742a5738a73591a4789422a3ac9a9ac34d107f86ed41a13551bad41a7,PodSandboxId:9c94e1cb0dd180d945fec246cfca9cddeada9362ad391b
96e90bf3637b16b89d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765855914547417482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rcwxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4c343db-7dab-4de5-89f2-ce2687b6631f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7886b3cb93db3490c21067b3f08a44be6ead41881cd5cb2681c295485c892b48,PodSandboxId:fa9452a9930b7fa54c0769f7dcd06a96965a9d8992337228ed778d536be4aa2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765855913538605352,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tst4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5bc773a-8ef2-4f79-bdd3-ead643257601,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8b867c1bdad08e7d4b11632415fa3ce077d2fbbc7d269b68165673c2c8ac4e0,PodSandboxId:dae9ca1923edaf49e343cba4e7db0bb11a35c290c3b22a27e42b7e493908a7be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765855913395194273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582fb99923c3b4e606630b30dbd77848,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1603771fc3bb957264f0e0f69af13f99d97b23a395e946185f6f8606e58076e3,PodSandboxId:7e479d126c3d4b0c4c9a19f1c82ac9acbe2fee3e6b99d562f65f8f7e28db08c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765855913432386960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d64ac4f35b628e0d630c2501f74195a7,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e1943eab5540b2524451fd9149e5bdb48d529525992e4427f98907806a8120c,PodSandboxId:18ab7bec90a901e061c388215a8dda583ecb32acea89125b131de26364383a3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765855913425395652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d448dfe87545fc587b352dd2eaa7a763,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af7add7fe9690f182b65bb9686b7a53b05509c0da8b8a4bc1586130f494913d,PodSandboxId:f2de72045bd453f040fc6752a3acecee68ede348807bf0988cb9d01cae15f88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765855913412415764,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f172ccbfdd61b8b761a84f05e6d663b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4047348-3d8c-4501-842f-83a851fb4b18 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.449642754Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a644d8b2-04a3-4b1b-8c6b-1277c7e4afb3 name=/runtime.v1.RuntimeService/Version
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.449723586Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a644d8b2-04a3-4b1b-8c6b-1277c7e4afb3 name=/runtime.v1.RuntimeService/Version
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.450835956Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5d71cece-a86d-4abe-a09b-f57e8eb7ae02 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.451324118Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765855956451289140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5d71cece-a86d-4abe-a09b-f57e8eb7ae02 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.452120651Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=515cd23c-2dd8-4d92-9cb4-d0d3f8eedd66 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.452340689Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=515cd23c-2dd8-4d92-9cb4-d0d3f8eedd66 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.453194390Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30cbfcf72c599e2f73f84aa02a68c130b8e7b215afd97bea124adf804a1a61cd,PodSandboxId:fa9452a9930b7fa54c0769f7dcd06a96965a9d8992337228ed778d536be4aa2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765855942253086054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tst4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5bc773a-8ef2-4f79-bdd3-ead643257601,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1659127a04c0e595791c31ebcd6c3cedcd9c75aa05195038dc70a120b78024d6,PodSandboxId:9c94e1cb0dd180d945fec246cfca9cddeada9362ad391b96e90bf3637b16b89d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765855942263923773,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rcwxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4c343db-7dab-4de5-89f2-ce2687b6631f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb21282af131f6b326f61f602cb404d88d036dbaf91f58041b9d0cc59a457f7,PodSandboxId:dae9ca1923edaf49e343cba4e7db0bb11a35c290c3b22a27e42b7e493908a7be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765855938427364711,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582fb99923c3b4e606630b30dbd77848,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac2603a4581191c10d4b3e10f52b0b12af70b9f64c42741b7205147715c7078,PodSandboxId:18ab7bec90a901e061c388215a8dda583ecb32acea89125b131de26364383a3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:C
ONTAINER_RUNNING,CreatedAt:1765855938448790249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d448dfe87545fc587b352dd2eaa7a763,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c4b91c38409e3b515017c952463970ec783353ede06e6b8d23b2ad4d57fd9b,PodSandboxId:f2de72045bd453f040fc6752a3acecee68ede348807bf0988cb9d01cae15f88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765855938416880990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f172ccbfdd61b8b761a84f05e6d663b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eac74d80c5016151ab3e0030f99826d0bb335fdbaf666b409fc395d37c2c73f,PodSandboxId:7e479d126c3d4b0c4c9a19f1c82ac9acbe2fee3e6b99d562f65f8f7e28db08c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a
5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765855938402228366,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d64ac4f35b628e0d630c2501f74195a7,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73fce5742a5738a73591a4789422a3ac9a9ac34d107f86ed41a13551bad41a7,PodSandboxId:9c94e1cb0dd180d945fec246cfca9cddeada9362ad391b
96e90bf3637b16b89d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765855914547417482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rcwxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4c343db-7dab-4de5-89f2-ce2687b6631f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7886b3cb93db3490c21067b3f08a44be6ead41881cd5cb2681c295485c892b48,PodSandboxId:fa9452a9930b7fa54c0769f7dcd06a96965a9d8992337228ed778d536be4aa2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765855913538605352,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tst4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5bc773a-8ef2-4f79-bdd3-ead643257601,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8b867c1bdad08e7d4b11632415fa3ce077d2fbbc7d269b68165673c2c8ac4e0,PodSandboxId:dae9ca1923edaf49e343cba4e7db0bb11a35c290c3b22a27e42b7e493908a7be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765855913395194273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582fb99923c3b4e606630b30dbd77848,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1603771fc3bb957264f0e0f69af13f99d97b23a395e946185f6f8606e58076e3,PodSandboxId:7e479d126c3d4b0c4c9a19f1c82ac9acbe2fee3e6b99d562f65f8f7e28db08c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765855913432386960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d64ac4f35b628e0d630c2501f74195a7,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e1943eab5540b2524451fd9149e5bdb48d529525992e4427f98907806a8120c,PodSandboxId:18ab7bec90a901e061c388215a8dda583ecb32acea89125b131de26364383a3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765855913425395652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d448dfe87545fc587b352dd2eaa7a763,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af7add7fe9690f182b65bb9686b7a53b05509c0da8b8a4bc1586130f494913d,PodSandboxId:f2de72045bd453f040fc6752a3acecee68ede348807bf0988cb9d01cae15f88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765855913412415764,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f172ccbfdd61b8b761a84f05e6d663b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=515cd23c-2dd8-4d92-9cb4-d0d3f8eedd66 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.500850544Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cc3c193a-1296-4687-87a8-64af86629a39 name=/runtime.v1.RuntimeService/Version
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.500969035Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cc3c193a-1296-4687-87a8-64af86629a39 name=/runtime.v1.RuntimeService/Version
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.502416880Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43ca3c6b-8d95-4237-977f-0fe971591201 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.502881205Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765855956502843291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43ca3c6b-8d95-4237-977f-0fe971591201 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.504024510Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=adcf5ecd-d6e4-4e47-b453-da38070d0d3d name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.504168545Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=adcf5ecd-d6e4-4e47-b453-da38070d0d3d name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:36 pause-127368 crio[2811]: time="2025-12-16 03:32:36.504558237Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30cbfcf72c599e2f73f84aa02a68c130b8e7b215afd97bea124adf804a1a61cd,PodSandboxId:fa9452a9930b7fa54c0769f7dcd06a96965a9d8992337228ed778d536be4aa2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765855942253086054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tst4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5bc773a-8ef2-4f79-bdd3-ead643257601,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1659127a04c0e595791c31ebcd6c3cedcd9c75aa05195038dc70a120b78024d6,PodSandboxId:9c94e1cb0dd180d945fec246cfca9cddeada9362ad391b96e90bf3637b16b89d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765855942263923773,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rcwxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4c343db-7dab-4de5-89f2-ce2687b6631f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb21282af131f6b326f61f602cb404d88d036dbaf91f58041b9d0cc59a457f7,PodSandboxId:dae9ca1923edaf49e343cba4e7db0bb11a35c290c3b22a27e42b7e493908a7be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765855938427364711,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582fb99923c3b4e606630b30dbd77848,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac2603a4581191c10d4b3e10f52b0b12af70b9f64c42741b7205147715c7078,PodSandboxId:18ab7bec90a901e061c388215a8dda583ecb32acea89125b131de26364383a3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:C
ONTAINER_RUNNING,CreatedAt:1765855938448790249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d448dfe87545fc587b352dd2eaa7a763,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c4b91c38409e3b515017c952463970ec783353ede06e6b8d23b2ad4d57fd9b,PodSandboxId:f2de72045bd453f040fc6752a3acecee68ede348807bf0988cb9d01cae15f88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765855938416880990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f172ccbfdd61b8b761a84f05e6d663b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eac74d80c5016151ab3e0030f99826d0bb335fdbaf666b409fc395d37c2c73f,PodSandboxId:7e479d126c3d4b0c4c9a19f1c82ac9acbe2fee3e6b99d562f65f8f7e28db08c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a
5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765855938402228366,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d64ac4f35b628e0d630c2501f74195a7,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73fce5742a5738a73591a4789422a3ac9a9ac34d107f86ed41a13551bad41a7,PodSandboxId:9c94e1cb0dd180d945fec246cfca9cddeada9362ad391b
96e90bf3637b16b89d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765855914547417482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rcwxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4c343db-7dab-4de5-89f2-ce2687b6631f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7886b3cb93db3490c21067b3f08a44be6ead41881cd5cb2681c295485c892b48,PodSandboxId:fa9452a9930b7fa54c0769f7dcd06a96965a9d8992337228ed778d536be4aa2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765855913538605352,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tst4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5bc773a-8ef2-4f79-bdd3-ead643257601,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8b867c1bdad08e7d4b11632415fa3ce077d2fbbc7d269b68165673c2c8ac4e0,PodSandboxId:dae9ca1923edaf49e343cba4e7db0bb11a35c290c3b22a27e42b7e493908a7be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765855913395194273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582fb99923c3b4e606630b30dbd77848,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1603771fc3bb957264f0e0f69af13f99d97b23a395e946185f6f8606e58076e3,PodSandboxId:7e479d126c3d4b0c4c9a19f1c82ac9acbe2fee3e6b99d562f65f8f7e28db08c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765855913432386960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d64ac4f35b628e0d630c2501f74195a7,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e1943eab5540b2524451fd9149e5bdb48d529525992e4427f98907806a8120c,PodSandboxId:18ab7bec90a901e061c388215a8dda583ecb32acea89125b131de26364383a3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765855913425395652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d448dfe87545fc587b352dd2eaa7a763,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af7add7fe9690f182b65bb9686b7a53b05509c0da8b8a4bc1586130f494913d,PodSandboxId:f2de72045bd453f040fc6752a3acecee68ede348807bf0988cb9d01cae15f88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765855913412415764,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f172ccbfdd61b8b761a84f05e6d663b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=adcf5ecd-d6e4-4e47-b453-da38070d0d3d name=/runtime.v1.RuntimeService/ListContainers
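	The three /runtime.v1.RuntimeService/ListContainers request/response pairs above are routine CRI polling captured while the logs were being collected; the container payload is identical in each, only the request id and timestamp differ. The same listing can be pulled by hand from inside the node with crictl (a sketch, assuming the default CRI-O socket path; the profile name is taken from the log above):
	
		out/minikube-linux-amd64 -p pause-127368 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"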
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	1659127a04c0e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   2                   9c94e1cb0dd18       coredns-66bc5c9577-rcwxg               kube-system
	30cbfcf72c599       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   14 seconds ago      Running             kube-proxy                2                   fa9452a9930b7       kube-proxy-6tst4                       kube-system
	eac2603a45811       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   18 seconds ago      Running             etcd                      2                   18ab7bec90a90       etcd-pause-127368                      kube-system
	bcb21282af131       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   18 seconds ago      Running             kube-scheduler            2                   dae9ca1923eda       kube-scheduler-pause-127368            kube-system
	a5c4b91c38409       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   18 seconds ago      Running             kube-controller-manager   2                   f2de72045bd45       kube-controller-manager-pause-127368   kube-system
	1eac74d80c501       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   18 seconds ago      Running             kube-apiserver            2                   7e479d126c3d4       kube-apiserver-pause-127368            kube-system
	c73fce5742a57       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   42 seconds ago      Exited              coredns                   1                   9c94e1cb0dd18       coredns-66bc5c9577-rcwxg               kube-system
	7886b3cb93db3       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   43 seconds ago      Exited              kube-proxy                1                   fa9452a9930b7       kube-proxy-6tst4                       kube-system
	1603771fc3bb9       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   43 seconds ago      Exited              kube-apiserver            1                   7e479d126c3d4       kube-apiserver-pause-127368            kube-system
	1e1943eab5540       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   43 seconds ago      Exited              etcd                      1                   18ab7bec90a90       etcd-pause-127368                      kube-system
	6af7add7fe969       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   43 seconds ago      Exited              kube-controller-manager   1                   f2de72045bd45       kube-controller-manager-pause-127368   kube-system
	b8b867c1bdad0       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   43 seconds ago      Exited              kube-scheduler            1                   dae9ca1923eda       kube-scheduler-pause-127368            kube-system
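	Every kube-system component in this table is at attempt 2 and Running, with its attempt-1 counterpart Exited and created roughly 25-30 seconds earlier, i.e. the second start recreated all control-plane containers rather than reusing them, which may be relevant to the TestPause/serial/SecondStartNoReconfiguration failure this post-mortem belongs to. A cross-check from the cluster side (hypothetical invocation, reusing the kubeconfig context named after the profile):
	
		kubectl --context pause-127368 get pods -n kube-system -o wide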
	
	
	==> coredns [1659127a04c0e595791c31ebcd6c3cedcd9c75aa05195038dc70a120b78024d6] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40926 - 15620 "HINFO IN 9096102086732634136.7010297379036465662. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032944409s
	
	
	==> coredns [c73fce5742a5738a73591a4789422a3ac9a9ac34d107f86ed41a13551bad41a7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:44559 - 51558 "HINFO IN 3322695004666707743.1788114612334312608. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063784547s
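	The attempt-1 coredns instance above logged repeated "waiting for Kubernetes API before starting server" messages, started with an unsynced API warning, and then went into lameduck mode on SIGTERM, consistent with the control-plane restart shown in the container table. Its full previous log can be fetched directly (pod name taken from the listing above):
	
		kubectl --context pause-127368 -n kube-system logs coredns-66bc5c9577-rcwxg --previous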
	
	
	==> describe nodes <==
	Name:               pause-127368
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-127368
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=pause-127368
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T03_31_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 03:30:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-127368
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 03:32:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 03:32:21 +0000   Tue, 16 Dec 2025 03:30:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 03:32:21 +0000   Tue, 16 Dec 2025 03:30:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 03:32:21 +0000   Tue, 16 Dec 2025 03:30:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 03:32:21 +0000   Tue, 16 Dec 2025 03:31:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.23
	  Hostname:    pause-127368
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 bdd176aec0a44d47b426ef6399527a4a
	  System UUID:                bdd176ae-c0a4-4d47-b426-ef6399527a4a
	  Boot ID:                    49c938f2-a066-4ecd-abb5-79dd6b2937b0
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-rcwxg                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     91s
	  kube-system                 etcd-pause-127368                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         96s
	  kube-system                 kube-apiserver-pause-127368             250m (12%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-pause-127368    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-6tst4                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-pause-127368             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 89s                kube-proxy       
	  Normal  Starting                 13s                kube-proxy       
	  Normal  Starting                 37s                kube-proxy       
	  Normal  Starting                 96s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  96s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  96s                kubelet          Node pause-127368 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                kubelet          Node pause-127368 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s                kubelet          Node pause-127368 status is now: NodeHasSufficientPID
	  Normal  NodeReady                95s                kubelet          Node pause-127368 status is now: NodeReady
	  Normal  RegisteredNode           92s                node-controller  Node pause-127368 event: Registered Node pause-127368 in Controller
	  Normal  RegisteredNode           35s                node-controller  Node pause-127368 event: Registered Node pause-127368 in Controller
	  Normal  Starting                 19s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18s (x8 over 18s)  kubelet          Node pause-127368 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x8 over 18s)  kubelet          Node pause-127368 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x7 over 18s)  kubelet          Node pause-127368 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12s                node-controller  Node pause-127368 event: Registered Node pause-127368 in Controller
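	The Events list records the node components coming up three times (kube-proxy Starting at 89s, 37s and 13s; RegisteredNode at 92s, 35s and 12s), which matches the attempt counters in the container listing. The same view can be regenerated at any time with standard kubectl (context name as above):
	
		kubectl --context pause-127368 describe node pause-127368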
	
	
	==> dmesg <==
	[Dec16 03:30] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000069] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.013878] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.191906] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.086066] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.096140] kauditd_printk_skb: 102 callbacks suppressed
	[Dec16 03:31] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.494816] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.104932] kauditd_printk_skb: 225 callbacks suppressed
	[ +21.513283] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.462103] kauditd_printk_skb: 297 callbacks suppressed
	[Dec16 03:32] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.120737] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.249338] kauditd_printk_skb: 112 callbacks suppressed
	[  +7.938767] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [1e1943eab5540b2524451fd9149e5bdb48d529525992e4427f98907806a8120c] <==
	{"level":"warn","ts":"2025-12-16T03:31:57.410762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:31:57.423067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:31:57.448874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:31:57.470852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:31:57.490746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:31:57.515375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:31:57.581838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37386","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-16T03:32:15.435036Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-16T03:32:15.435109Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-127368","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.23:2380"],"advertise-client-urls":["https://192.168.83.23:2379"]}
	{"level":"error","ts":"2025-12-16T03:32:15.435197Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-16T03:32:15.435251Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-12-16T03:32:15.437107Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-16T03:32:15.437164Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-16T03:32:15.437171Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-16T03:32:15.437222Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.23:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-16T03:32:15.437229Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.23:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-16T03:32:15.437235Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.23:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-12-16T03:32:15.437267Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-16T03:32:15.437334Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"83122a6f182c046f","current-leader-member-id":"83122a6f182c046f"}
	{"level":"info","ts":"2025-12-16T03:32:15.437374Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-16T03:32:15.437403Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-16T03:32:15.440734Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.83.23:2380"}
	{"level":"error","ts":"2025-12-16T03:32:15.440809Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.23:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-16T03:32:15.440833Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.83.23:2380"}
	{"level":"info","ts":"2025-12-16T03:32:15.440839Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-127368","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.23:2380"],"advertise-client-urls":["https://192.168.83.23:2379"]}
	
	
	==> etcd [eac2603a4581191c10d4b3e10f52b0b12af70b9f64c42741b7205147715c7078] <==
	{"level":"warn","ts":"2025-12-16T03:32:20.213327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.250593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.267720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.278961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.298257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.309647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.321579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.341631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.350840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.360241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.374736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.391761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.399540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.425010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.441822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.470694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.474628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.489909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.500240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.511251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.529960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.568454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.579265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.586478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.638238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51888","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:32:36 up 2 min,  0 users,  load average: 1.22, 0.53, 0.20
	Linux pause-127368 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 00:48:01 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [1603771fc3bb957264f0e0f69af13f99d97b23a395e946185f6f8606e58076e3] <==
	I1216 03:32:05.378910       1 storage_flowcontrol.go:172] APF bootstrap ensurer is exiting
	I1216 03:32:05.378921       1 cluster_authentication_trust_controller.go:482] Shutting down cluster_authentication_trust_controller controller
	I1216 03:32:05.378933       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I1216 03:32:05.378944       1 controller.go:132] Ending legacy_token_tracking_controller
	I1216 03:32:05.378948       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I1216 03:32:05.378956       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I1216 03:32:05.378968       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I1216 03:32:05.378975       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I1216 03:32:05.380119       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1216 03:32:05.380419       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1216 03:32:05.380550       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	I1216 03:32:05.380849       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1216 03:32:05.380883       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I1216 03:32:05.380952       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1216 03:32:05.380997       1 secure_serving.go:259] Stopped listening on [::]:8443
	I1216 03:32:05.381008       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1216 03:32:05.381162       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1216 03:32:05.381230       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1216 03:32:05.381271       1 controller.go:157] Shutting down quota evaluator
	I1216 03:32:05.381313       1 controller.go:176] quota evaluator worker shutdown
	I1216 03:32:05.382193       1 repairip.go:246] Shutting down ipallocator-repair-controller
	I1216 03:32:05.382747       1 controller.go:176] quota evaluator worker shutdown
	I1216 03:32:05.382772       1 controller.go:176] quota evaluator worker shutdown
	I1216 03:32:05.382778       1 controller.go:176] quota evaluator worker shutdown
	I1216 03:32:05.382781       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-apiserver [1eac74d80c5016151ab3e0030f99826d0bb335fdbaf666b409fc395d37c2c73f] <==
	I1216 03:32:21.496257       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1216 03:32:21.496338       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 03:32:21.500415       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 03:32:21.504573       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1216 03:32:21.504699       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1216 03:32:21.507847       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 03:32:21.507935       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1216 03:32:21.508017       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1216 03:32:21.508056       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 03:32:21.510571       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1216 03:32:21.510648       1 aggregator.go:171] initial CRD sync complete...
	I1216 03:32:21.510683       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 03:32:21.510695       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 03:32:21.510700       1 cache.go:39] Caches are synced for autoregister controller
	I1216 03:32:21.561145       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 03:32:21.971948       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 03:32:22.330894       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1216 03:32:23.119016       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.83.23]
	I1216 03:32:23.120607       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 03:32:23.126180       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 03:32:23.726236       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 03:32:23.787183       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1216 03:32:23.828605       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 03:32:23.839411       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 03:32:31.900086       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [6af7add7fe9690f182b65bb9686b7a53b05509c0da8b8a4bc1586130f494913d] <==
	I1216 03:32:01.869782       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 03:32:01.869867       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1216 03:32:01.869875       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 03:32:01.869934       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 03:32:01.869945       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 03:32:01.869952       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 03:32:01.872584       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1216 03:32:01.872660       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1216 03:32:01.872706       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-127368"
	I1216 03:32:01.872735       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1216 03:32:01.872739       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1216 03:32:01.876612       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 03:32:01.878005       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1216 03:32:01.880440       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1216 03:32:01.882966       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1216 03:32:01.885524       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1216 03:32:01.885574       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1216 03:32:01.885578       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 03:32:01.886869       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1216 03:32:01.887943       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1216 03:32:01.890358       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 03:32:01.891378       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 03:32:01.892328       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1216 03:32:01.894853       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1216 03:32:01.894918       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [a5c4b91c38409e3b515017c952463970ec783353ede06e6b8d23b2ad4d57fd9b] <==
	I1216 03:32:24.845231       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1216 03:32:24.845371       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1216 03:32:24.845384       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 03:32:24.852557       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1216 03:32:24.856180       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1216 03:32:24.863799       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 03:32:24.864310       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1216 03:32:24.873599       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 03:32:24.873631       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 03:32:24.873637       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1216 03:32:24.878693       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 03:32:24.880599       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1216 03:32:24.880721       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1216 03:32:24.881004       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1216 03:32:24.881872       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1216 03:32:24.882198       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1216 03:32:24.882765       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 03:32:24.885197       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1216 03:32:24.886156       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1216 03:32:24.886277       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1216 03:32:24.886369       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-127368"
	I1216 03:32:24.886574       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1216 03:32:24.889296       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 03:32:24.902960       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1216 03:32:24.927018       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [30cbfcf72c599e2f73f84aa02a68c130b8e7b215afd97bea124adf804a1a61cd] <==
	I1216 03:32:22.652039       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 03:32:22.757632       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 03:32:22.758206       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.23"]
	E1216 03:32:22.758323       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 03:32:22.831719       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1216 03:32:22.831796       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 03:32:22.831830       1 server_linux.go:132] "Using iptables Proxier"
	I1216 03:32:22.851245       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 03:32:22.852881       1 server.go:527] "Version info" version="v1.34.2"
	I1216 03:32:22.852895       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:32:22.863562       1 config.go:200] "Starting service config controller"
	I1216 03:32:22.863692       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 03:32:22.864027       1 config.go:106] "Starting endpoint slice config controller"
	I1216 03:32:22.864309       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 03:32:22.864553       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 03:32:22.864886       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 03:32:22.864633       1 config.go:309] "Starting node config controller"
	I1216 03:32:22.865397       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 03:32:22.865660       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 03:32:22.964789       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 03:32:22.965259       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 03:32:22.965267       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [7886b3cb93db3490c21067b3f08a44be6ead41881cd5cb2681c295485c892b48] <==
	I1216 03:31:56.300542       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 03:31:58.602341       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 03:31:58.602442       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.23"]
	E1216 03:31:58.602724       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 03:31:58.794971       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1216 03:31:58.795238       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 03:31:58.795338       1 server_linux.go:132] "Using iptables Proxier"
	I1216 03:31:58.867179       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 03:31:58.879911       1 server.go:527] "Version info" version="v1.34.2"
	I1216 03:31:58.879955       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:31:58.926927       1 config.go:309] "Starting node config controller"
	I1216 03:31:58.927019       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 03:31:58.927049       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 03:31:58.928100       1 config.go:200] "Starting service config controller"
	I1216 03:31:58.928165       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 03:31:58.928212       1 config.go:106] "Starting endpoint slice config controller"
	I1216 03:31:58.928220       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 03:31:58.928238       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 03:31:58.928243       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 03:31:59.028937       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 03:31:59.029051       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 03:31:59.029151       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b8b867c1bdad08e7d4b11632415fa3ce077d2fbbc7d269b68165673c2c8ac4e0] <==
	I1216 03:31:57.242947       1 serving.go:386] Generated self-signed cert in-memory
	I1216 03:31:59.145401       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 03:31:59.145439       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:31:59.151185       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 03:31:59.151276       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1216 03:31:59.151286       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1216 03:31:59.151309       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 03:31:59.153957       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:31:59.153983       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:31:59.153998       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 03:31:59.154003       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 03:31:59.251809       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1216 03:31:59.254100       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 03:31:59.254238       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:32:15.710286       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1216 03:32:15.710355       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:32:15.710387       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 03:32:15.710406       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1216 03:32:15.710976       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1216 03:32:15.711024       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1216 03:32:15.711043       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1216 03:32:15.711070       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bcb21282af131f6b326f61f602cb404d88d036dbaf91f58041b9d0cc59a457f7] <==
	I1216 03:32:19.963173       1 serving.go:386] Generated self-signed cert in-memory
	I1216 03:32:22.619768       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 03:32:22.621634       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:32:22.650785       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 03:32:22.655790       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 03:32:22.656566       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:32:22.659264       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:32:22.656589       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 03:32:22.660323       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 03:32:22.655894       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1216 03:32:22.665707       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1216 03:32:22.760705       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 03:32:22.761530       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:32:22.765993       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 16 03:32:21 pause-127368 kubelet[3947]: E1216 03:32:21.234331    3947 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-127368\" not found" node="pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: E1216 03:32:21.234724    3947 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-127368\" not found" node="pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: E1216 03:32:21.235021    3947 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-127368\" not found" node="pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: E1216 03:32:21.260173    3947 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-127368\" not found" node="pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.424355    3947 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: E1216 03:32:21.553140    3947 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-127368\" already exists" pod="kube-system/kube-apiserver-pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.553435    3947 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: E1216 03:32:21.568706    3947 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-127368\" already exists" pod="kube-system/kube-controller-manager-pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.568746    3947 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: E1216 03:32:21.582317    3947 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-127368\" already exists" pod="kube-system/kube-scheduler-pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.582440    3947 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.591775    3947 kubelet_node_status.go:124] "Node was previously registered" node="pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.591886    3947 kubelet_node_status.go:78] "Successfully registered node" node="pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.591922    3947 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.594182    3947 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: E1216 03:32:21.608157    3947 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-127368\" already exists" pod="kube-system/etcd-pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.909568    3947 apiserver.go:52] "Watching apiserver"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.921657    3947 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.964235    3947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5bc773a-8ef2-4f79-bdd3-ead643257601-xtables-lock\") pod \"kube-proxy-6tst4\" (UID: \"c5bc773a-8ef2-4f79-bdd3-ead643257601\") " pod="kube-system/kube-proxy-6tst4"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.966744    3947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5bc773a-8ef2-4f79-bdd3-ead643257601-lib-modules\") pod \"kube-proxy-6tst4\" (UID: \"c5bc773a-8ef2-4f79-bdd3-ead643257601\") " pod="kube-system/kube-proxy-6tst4"
	Dec 16 03:32:22 pause-127368 kubelet[3947]: I1216 03:32:22.215982    3947 scope.go:117] "RemoveContainer" containerID="c73fce5742a5738a73591a4789422a3ac9a9ac34d107f86ed41a13551bad41a7"
	Dec 16 03:32:22 pause-127368 kubelet[3947]: I1216 03:32:22.216396    3947 scope.go:117] "RemoveContainer" containerID="7886b3cb93db3490c21067b3f08a44be6ead41881cd5cb2681c295485c892b48"
	Dec 16 03:32:28 pause-127368 kubelet[3947]: E1216 03:32:28.058447    3947 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765855948057766854 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 16 03:32:28 pause-127368 kubelet[3947]: E1216 03:32:28.058476    3947 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765855948057766854 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 16 03:32:31 pause-127368 kubelet[3947]: I1216 03:32:31.867781    3947 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-127368 -n pause-127368
helpers_test.go:270: (dbg) Run:  kubectl --context pause-127368 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-127368 -n pause-127368
helpers_test.go:253: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-127368 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-127368 logs -n 25: (1.264034681s)
helpers_test.go:261: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-079027 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                        │ cilium-079027             │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │                     │
	│ ssh     │ -p cilium-079027 sudo cat /etc/containerd/config.toml                                                                                                                                                                   │ cilium-079027             │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │                     │
	│ ssh     │ -p cilium-079027 sudo containerd config dump                                                                                                                                                                            │ cilium-079027             │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │                     │
	│ ssh     │ -p cilium-079027 sudo systemctl status crio --all --full --no-pager                                                                                                                                                     │ cilium-079027             │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │                     │
	│ ssh     │ -p cilium-079027 sudo systemctl cat crio --no-pager                                                                                                                                                                     │ cilium-079027             │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │                     │
	│ ssh     │ -p cilium-079027 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                           │ cilium-079027             │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │                     │
	│ ssh     │ -p cilium-079027 sudo crio config                                                                                                                                                                                       │ cilium-079027             │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │                     │
	│ delete  │ -p cilium-079027                                                                                                                                                                                                        │ cilium-079027             │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │ 16 Dec 25 03:29 UTC │
	│ start   │ -p guest-064510 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                 │ guest-064510              │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │ 16 Dec 25 03:29 UTC │
	│ delete  │ -p force-systemd-env-050892                                                                                                                                                                                             │ force-systemd-env-050892  │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │ 16 Dec 25 03:29 UTC │
	│ start   │ -p cert-expiration-121062 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                    │ cert-expiration-121062    │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │ 16 Dec 25 03:30 UTC │
	│ delete  │ -p kubernetes-upgrade-352947                                                                                                                                                                                            │ kubernetes-upgrade-352947 │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │ 16 Dec 25 03:29 UTC │
	│ start   │ -p force-systemd-flag-103596 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                               │ force-systemd-flag-103596 │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │ 16 Dec 25 03:30 UTC │
	│ start   │ -p pause-127368 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                 │ pause-127368              │ jenkins │ v1.37.0 │ 16 Dec 25 03:29 UTC │ 16 Dec 25 03:31 UTC │
	│ ssh     │ force-systemd-flag-103596 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                    │ force-systemd-flag-103596 │ jenkins │ v1.37.0 │ 16 Dec 25 03:30 UTC │ 16 Dec 25 03:30 UTC │
	│ delete  │ -p force-systemd-flag-103596                                                                                                                                                                                            │ force-systemd-flag-103596 │ jenkins │ v1.37.0 │ 16 Dec 25 03:30 UTC │ 16 Dec 25 03:30 UTC │
	│ start   │ -p cert-options-972236 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-972236       │ jenkins │ v1.37.0 │ 16 Dec 25 03:30 UTC │ 16 Dec 25 03:31 UTC │
	│ ssh     │ cert-options-972236 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                             │ cert-options-972236       │ jenkins │ v1.37.0 │ 16 Dec 25 03:31 UTC │ 16 Dec 25 03:31 UTC │
	│ ssh     │ -p cert-options-972236 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                           │ cert-options-972236       │ jenkins │ v1.37.0 │ 16 Dec 25 03:31 UTC │ 16 Dec 25 03:31 UTC │
	│ delete  │ -p cert-options-972236                                                                                                                                                                                                  │ cert-options-972236       │ jenkins │ v1.37.0 │ 16 Dec 25 03:31 UTC │ 16 Dec 25 03:31 UTC │
	│ start   │ -p auto-079027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                                                                                   │ auto-079027               │ jenkins │ v1.37.0 │ 16 Dec 25 03:31 UTC │                     │
	│ start   │ -p pause-127368 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-127368              │ jenkins │ v1.37.0 │ 16 Dec 25 03:31 UTC │ 16 Dec 25 03:32 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile running-upgrade-418673 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ running-upgrade-418673    │ jenkins │ v1.37.0 │ 16 Dec 25 03:31 UTC │                     │
	│ delete  │ -p running-upgrade-418673                                                                                                                                                                                               │ running-upgrade-418673    │ jenkins │ v1.37.0 │ 16 Dec 25 03:32 UTC │ 16 Dec 25 03:32 UTC │
	│ start   │ -p kindnet-079027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                                                                                  │ kindnet-079027            │ jenkins │ v1.37.0 │ 16 Dec 25 03:32 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 03:32:01
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 03:32:01.335023   43455 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:32:01.335276   43455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:01.335284   43455 out.go:374] Setting ErrFile to fd 2...
	I1216 03:32:01.335288   43455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:32:01.335565   43455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
	I1216 03:32:01.336828   43455 out.go:368] Setting JSON to false
	I1216 03:32:01.337738   43455 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4466,"bootTime":1765851455,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:32:01.337791   43455 start.go:143] virtualization: kvm guest
	I1216 03:32:01.339938   43455 out.go:179] * [kindnet-079027] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:32:01.341275   43455 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:32:01.341271   43455 notify.go:221] Checking for updates...
	I1216 03:32:01.343546   43455 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:32:01.344752   43455 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	I1216 03:32:01.345963   43455 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	I1216 03:32:01.347185   43455 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:32:01.348360   43455 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:32:01.350264   43455 config.go:182] Loaded profile config "auto-079027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:32:01.350378   43455 config.go:182] Loaded profile config "cert-expiration-121062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:32:01.350482   43455 config.go:182] Loaded profile config "guest-064510": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1216 03:32:01.350698   43455 config.go:182] Loaded profile config "pause-127368": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:32:01.350849   43455 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:32:01.399437   43455 out.go:179] * Using the kvm2 driver based on user configuration
	I1216 03:32:01.400662   43455 start.go:309] selected driver: kvm2
	I1216 03:32:01.400700   43455 start.go:927] validating driver "kvm2" against <nil>
	I1216 03:32:01.400714   43455 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:32:01.401703   43455 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 03:32:01.402041   43455 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:32:01.402072   43455 cni.go:84] Creating CNI manager for "kindnet"
	I1216 03:32:01.402080   43455 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 03:32:01.402118   43455 start.go:353] cluster config:
	{Name:kindnet-079027 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-079027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:32:01.402230   43455 iso.go:125] acquiring lock: {Name:mk055aa36b1051bc664b283a8a6fb2af4db94c44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 03:32:01.403464   43455 out.go:179] * Starting "kindnet-079027" primary control-plane node in "kindnet-079027" cluster
	I1216 03:32:01.404395   43455 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:32:01.404442   43455 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 03:32:01.404456   43455 cache.go:65] Caching tarball of preloaded images
	I1216 03:32:01.404534   43455 preload.go:238] Found /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1216 03:32:01.404547   43455 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1216 03:32:01.404652   43455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/config.json ...
	I1216 03:32:01.404677   43455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/config.json: {Name:mk29468448342ae4c959d22444e4b1b6618e5c5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:01.404829   43455 start.go:360] acquireMachinesLock for kindnet-079027: {Name:mk6501572e7fc03699ef9d932e34f995d8ad6f98 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1216 03:32:01.404870   43455 start.go:364] duration metric: took 25.209µs to acquireMachinesLock for "kindnet-079027"
	I1216 03:32:01.404892   43455 start.go:93] Provisioning new machine with config: &{Name:kindnet-079027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-079027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:32:01.404995   43455 start.go:125] createHost starting for "" (driver="kvm2")
	I1216 03:32:03.315984   43066 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 03:32:03.316052   43066 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:32:03.316154   43066 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:32:03.316291   43066 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:32:03.316447   43066 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:32:03.316564   43066 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 03:32:03.317849   43066 out.go:252]   - Generating certificates and keys ...
	I1216 03:32:03.317963   43066 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:32:03.318059   43066 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:32:03.318154   43066 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 03:32:03.318270   43066 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 03:32:03.318365   43066 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 03:32:03.318458   43066 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 03:32:03.318536   43066 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 03:32:03.318723   43066 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-079027 localhost] and IPs [192.168.50.67 127.0.0.1 ::1]
	I1216 03:32:03.318815   43066 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 03:32:03.318951   43066 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-079027 localhost] and IPs [192.168.50.67 127.0.0.1 ::1]
	I1216 03:32:03.319013   43066 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 03:32:03.319070   43066 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 03:32:03.319109   43066 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 03:32:03.319157   43066 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:32:03.319205   43066 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:32:03.319257   43066 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:32:03.319322   43066 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:32:03.319420   43066 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:32:03.319505   43066 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:32:03.319645   43066 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:32:03.319736   43066 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:32:03.321148   43066 out.go:252]   - Booting up control plane ...
	I1216 03:32:03.321238   43066 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:32:03.321303   43066 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:32:03.321360   43066 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:32:03.321442   43066 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:32:03.321522   43066 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:32:03.321621   43066 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:32:03.321713   43066 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:32:03.321799   43066 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:32:03.322012   43066 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:32:03.322190   43066 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 03:32:03.322300   43066 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002334799s
	I1216 03:32:03.322427   43066 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 03:32:03.322534   43066 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.50.67:8443/livez
	I1216 03:32:03.322656   43066 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 03:32:03.322763   43066 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 03:32:03.322886   43066 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.889705921s
	I1216 03:32:03.323024   43066 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.367765911s
	I1216 03:32:03.323129   43066 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.005516381s
	I1216 03:32:03.323265   43066 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:32:03.323414   43066 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:32:03.323514   43066 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:32:03.323779   43066 kubeadm.go:319] [mark-control-plane] Marking the node auto-079027 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:32:03.323876   43066 kubeadm.go:319] [bootstrap-token] Using token: vhq760.ftqshaumwpqec4fg
	I1216 03:32:03.325135   43066 out.go:252]   - Configuring RBAC rules ...
	I1216 03:32:03.325277   43066 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 03:32:03.325404   43066 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 03:32:03.325553   43066 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 03:32:03.325701   43066 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 03:32:03.325822   43066 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 03:32:03.325944   43066 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 03:32:03.326084   43066 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 03:32:03.326123   43066 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1216 03:32:03.326202   43066 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1216 03:32:03.326213   43066 kubeadm.go:319] 
	I1216 03:32:03.326302   43066 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1216 03:32:03.326314   43066 kubeadm.go:319] 
	I1216 03:32:03.326421   43066 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1216 03:32:03.326429   43066 kubeadm.go:319] 
	I1216 03:32:03.326465   43066 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1216 03:32:03.326551   43066 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 03:32:03.326643   43066 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 03:32:03.326659   43066 kubeadm.go:319] 
	I1216 03:32:03.326740   43066 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1216 03:32:03.326751   43066 kubeadm.go:319] 
	I1216 03:32:03.326830   43066 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 03:32:03.326841   43066 kubeadm.go:319] 
	I1216 03:32:03.326911   43066 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1216 03:32:03.327040   43066 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 03:32:03.327151   43066 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 03:32:03.327161   43066 kubeadm.go:319] 
	I1216 03:32:03.327246   43066 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 03:32:03.327350   43066 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1216 03:32:03.327359   43066 kubeadm.go:319] 
	I1216 03:32:03.327442   43066 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token vhq760.ftqshaumwpqec4fg \
	I1216 03:32:03.327591   43066 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6d3bac17af9f836812b78bb65fe3149db071d191150485ad31b907e98cbc14f1 \
	I1216 03:32:03.327631   43066 kubeadm.go:319] 	--control-plane 
	I1216 03:32:03.327646   43066 kubeadm.go:319] 
	I1216 03:32:03.327762   43066 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1216 03:32:03.327774   43066 kubeadm.go:319] 
	I1216 03:32:03.327896   43066 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token vhq760.ftqshaumwpqec4fg \
	I1216 03:32:03.328072   43066 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6d3bac17af9f836812b78bb65fe3149db071d191150485ad31b907e98cbc14f1 
	I1216 03:32:03.328090   43066 cni.go:84] Creating CNI manager for ""
	I1216 03:32:03.328099   43066 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 03:32:03.329541   43066 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 03:32:01.406423   43455 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1216 03:32:01.406605   43455 start.go:159] libmachine.API.Create for "kindnet-079027" (driver="kvm2")
	I1216 03:32:01.406636   43455 client.go:173] LocalClient.Create starting
	I1216 03:32:01.406706   43455 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem
	I1216 03:32:01.406741   43455 main.go:143] libmachine: Decoding PEM data...
	I1216 03:32:01.406765   43455 main.go:143] libmachine: Parsing certificate...
	I1216 03:32:01.406827   43455 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem
	I1216 03:32:01.406853   43455 main.go:143] libmachine: Decoding PEM data...
	I1216 03:32:01.406870   43455 main.go:143] libmachine: Parsing certificate...
	I1216 03:32:01.407150   43455 main.go:143] libmachine: creating domain...
	I1216 03:32:01.407165   43455 main.go:143] libmachine: creating network...
	I1216 03:32:01.408644   43455 main.go:143] libmachine: found existing default network
	I1216 03:32:01.408887   43455 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1216 03:32:01.409717   43455 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:70:3c:ab} reservation:<nil>}
	I1216 03:32:01.410693   43455 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f2:2b:23} reservation:<nil>}
	I1216 03:32:01.411196   43455 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:47:35:32} reservation:<nil>}
	I1216 03:32:01.412028   43455 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bc0ed0}
	I1216 03:32:01.412120   43455 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-kindnet-079027</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1216 03:32:01.417342   43455 main.go:143] libmachine: creating private network mk-kindnet-079027 192.168.72.0/24...
	I1216 03:32:01.490946   43455 main.go:143] libmachine: private network mk-kindnet-079027 192.168.72.0/24 created
	I1216 03:32:01.491299   43455 main.go:143] libmachine: <network>
	  <name>mk-kindnet-079027</name>
	  <uuid>8fe7437f-3676-48c0-bfd5-b979f9a3095b</uuid>
	  <bridge name='virbr4' stp='on' delay='0'/>
	  <mac address='52:54:00:b9:22:68'/>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
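
The kvm2 driver registers this isolated network with libvirt before defining the VM. As a rough sketch only of the same operation done by hand, the XML above could be saved to a file and registered via virsh; the file name below is a hypothetical placeholder, and minikube itself talks to libvirt through Go bindings rather than shelling out.

// Illustrative sketch: define and start a libvirt network from an XML file
// by shelling out to virsh. The XML path is a placeholder; the network name
// matches the one created in the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func runVirsh(args ...string) error {
	out, err := exec.Command("virsh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("virsh %v: %v: %s", args, err, out)
	}
	return nil
}

func main() {
	// Assumes mk-kindnet-079027.xml holds a <network> definition like the one logged above.
	if err := runVirsh("net-define", "mk-kindnet-079027.xml"); err != nil {
		log.Fatal(err)
	}
	if err := runVirsh("net-start", "mk-kindnet-079027"); err != nil {
		log.Fatal(err)
	}
	if err := runVirsh("net-autostart", "mk-kindnet-079027"); err != nil {
		log.Fatal(err)
	}
}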
	
	I1216 03:32:01.491329   43455 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027 ...
	I1216 03:32:01.491350   43455 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/22158-5036/.minikube/cache/iso/amd64/minikube-v1.37.0-1765836331-22158-amd64.iso
	I1216 03:32:01.491359   43455 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22158-5036/.minikube
	I1216 03:32:01.491434   43455 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/22158-5036/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22158-5036/.minikube/cache/iso/amd64/minikube-v1.37.0-1765836331-22158-amd64.iso...
	I1216 03:32:01.748015   43455 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/id_rsa...
	I1216 03:32:01.864709   43455 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/kindnet-079027.rawdisk...
	I1216 03:32:01.864776   43455 main.go:143] libmachine: Writing magic tar header
	I1216 03:32:01.864811   43455 main.go:143] libmachine: Writing SSH key tar header
	I1216 03:32:01.864968   43455 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027 ...
	I1216 03:32:01.865054   43455 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027
	I1216 03:32:01.865094   43455 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027 (perms=drwx------)
	I1216 03:32:01.865115   43455 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22158-5036/.minikube/machines
	I1216 03:32:01.865132   43455 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22158-5036/.minikube/machines (perms=drwxr-xr-x)
	I1216 03:32:01.865150   43455 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22158-5036/.minikube
	I1216 03:32:01.865181   43455 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22158-5036/.minikube (perms=drwxr-xr-x)
	I1216 03:32:01.865200   43455 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22158-5036
	I1216 03:32:01.865214   43455 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22158-5036 (perms=drwxrwxr-x)
	I1216 03:32:01.865227   43455 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1216 03:32:01.865238   43455 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1216 03:32:01.865251   43455 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1216 03:32:01.865269   43455 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1216 03:32:01.865287   43455 main.go:143] libmachine: checking permissions on dir: /home
	I1216 03:32:01.865300   43455 main.go:143] libmachine: skipping /home - not owner
	I1216 03:32:01.865310   43455 main.go:143] libmachine: defining domain...
	I1216 03:32:01.867056   43455 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>kindnet-079027</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/kindnet-079027.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-kindnet-079027'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1216 03:32:01.872164   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:d9:8f:3b in network default
	I1216 03:32:01.872878   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:01.872897   43455 main.go:143] libmachine: starting domain...
	I1216 03:32:01.872902   43455 main.go:143] libmachine: ensuring networks are active...
	I1216 03:32:01.873689   43455 main.go:143] libmachine: Ensuring network default is active
	I1216 03:32:01.874327   43455 main.go:143] libmachine: Ensuring network mk-kindnet-079027 is active
	I1216 03:32:01.875341   43455 main.go:143] libmachine: getting domain XML...
	I1216 03:32:01.876826   43455 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>kindnet-079027</name>
	  <uuid>1d2bf1bb-66d0-4601-995b-378d47476890</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/kindnet-079027.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:0f:e2:b0'/>
	      <source network='mk-kindnet-079027'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:d9:8f:3b'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1216 03:32:03.270886   43455 main.go:143] libmachine: waiting for domain to start...
	I1216 03:32:03.272261   43455 main.go:143] libmachine: domain is now running
	I1216 03:32:03.272281   43455 main.go:143] libmachine: waiting for IP...
	I1216 03:32:03.273327   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:03.274030   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:03.274044   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:03.274361   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:03.274402   43455 retry.go:31] will retry after 224.720932ms: waiting for domain to come up
	I1216 03:32:03.501103   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:03.501956   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:03.501976   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:03.502456   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:03.502495   43455 retry.go:31] will retry after 327.206572ms: waiting for domain to come up
	I1216 03:32:03.831106   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:03.831903   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:03.831932   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:03.832266   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:03.832300   43455 retry.go:31] will retry after 386.458842ms: waiting for domain to come up
	I1216 03:32:04.219723   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:04.220375   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:04.220395   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:04.220713   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:04.220750   43455 retry.go:31] will retry after 398.825546ms: waiting for domain to come up
	I1216 03:32:04.621120   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:04.621970   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:04.621989   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:04.622346   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:04.622394   43455 retry.go:31] will retry after 708.753951ms: waiting for domain to come up
	I1216 03:32:05.333424   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:05.334192   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:05.334213   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:05.334579   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:05.334621   43455 retry.go:31] will retry after 707.904265ms: waiting for domain to come up
	I1216 03:32:06.044388   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:06.044964   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:06.044988   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:06.045398   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:06.045439   43455 retry.go:31] will retry after 1.00904731s: waiting for domain to come up
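
The repeated "waiting for domain to come up" entries are a poll-with-backoff loop: the driver looks for a DHCP lease (and then ARP entries) matching the new domain's MAC address, sleeping an increasing interval between attempts. A minimal sketch of that pattern, assuming virsh is available and treating the MAC and network names from the log as placeholders (minikube queries libvirt directly rather than invoking virsh):

// Sketch of a poll-with-backoff loop for a libvirt DHCP lease.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// leaseIP scans `virsh net-dhcp-leases <network>` output for the given MAC
// and returns the leased IP if one is present.
func leaseIP(network, mac string) (string, bool) {
	out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
	if err != nil {
		return "", false
	}
	for _, line := range strings.Split(string(out), "\n") {
		fields := strings.Fields(line)
		for i, f := range fields {
			// In virsh output the IP/prefix column follows MAC and protocol.
			if strings.EqualFold(f, mac) && i+2 < len(fields) {
				return strings.SplitN(fields[i+2], "/", 2)[0], true
			}
		}
	}
	return "", false
}

func main() {
	const network, mac = "mk-kindnet-079027", "52:54:00:0f:e2:b0"
	delay := 250 * time.Millisecond
	for i := 0; i < 20; i++ {
		if ip, ok := leaseIP(network, mac); ok {
			fmt.Println("domain IP:", ip)
			return
		}
		time.Sleep(delay)
		if delay < 3*time.Second {
			delay *= 2 // back off, roughly like the retry intervals in the log
		}
	}
	fmt.Println("timed out waiting for a DHCP lease")
}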
	I1216 03:32:03.330591   43066 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 03:32:03.344643   43066 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
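
The 496-byte /etc/cni/net.d/1-k8s.conflist written here is the bridge CNI configuration referred to by the earlier "Configuring bridge CNI" step. The log does not show its contents; a bridge-plus-portmap conflist generally has roughly this shape (illustrative field values, not the exact file minikube generates):

{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}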
	I1216 03:32:03.368661   43066 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:32:03.368806   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:32:03.368813   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-079027 minikube.k8s.io/updated_at=2025_12_16T03_32_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e minikube.k8s.io/name=auto-079027 minikube.k8s.io/primary=true
	I1216 03:32:03.415136   43066 ops.go:34] apiserver oom_adj: -16
	I1216 03:32:03.512124   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:32:04.013140   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:32:04.512547   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:32:05.012990   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:32:05.512794   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:32:06.013050   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:32:06.512419   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:32:07.012942   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:32:07.512308   43066 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 03:32:07.620329   43066 kubeadm.go:1114] duration metric: took 4.251590465s to wait for elevateKubeSystemPrivileges
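
elevateKubeSystemPrivileges is the loop above: it retries "kubectl get sa default" until the default ServiceAccount exists, alongside creating the minikube-rbac clusterrolebinding. A simplified sketch of that wait-then-bind sequence, shelling out to kubectl with a placeholder kubeconfig path (minikube runs the equivalent commands over SSH inside the VM):

// Sketch: wait for the "default" ServiceAccount, then grant cluster-admin to
// kube-system service accounts, mirroring the repeated "kubectl get sa
// default" runs and the clusterrolebinding created above.
package main

import (
	"log"
	"os/exec"
	"time"
)

func kubectl(args ...string) error {
	all := append([]string{"--kubeconfig", "/var/lib/minikube/kubeconfig"}, args...)
	return exec.Command("kubectl", all...).Run()
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for kubectl("get", "sa", "default") != nil {
		if time.Now().After(deadline) {
			log.Fatal("default service account never appeared")
		}
		time.Sleep(500 * time.Millisecond)
	}
	if err := kubectl("create", "clusterrolebinding", "minikube-rbac",
		"--clusterrole=cluster-admin", "--serviceaccount=kube-system:default"); err != nil {
		log.Fatal(err)
	}
}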
	I1216 03:32:07.620377   43066 kubeadm.go:403] duration metric: took 17.790594517s to StartCluster
	I1216 03:32:07.620401   43066 settings.go:142] acquiring lock: {Name:mk546ecdfe1860ae68a814905b53e6453298b4fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:07.620491   43066 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5036/kubeconfig
	I1216 03:32:07.621836   43066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/kubeconfig: {Name:mk6832d71ef0ad581fa898dceefc2fcc2fd665b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:07.622075   43066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 03:32:07.622079   43066 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.50.67 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:32:07.622176   43066 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:32:07.622264   43066 addons.go:70] Setting storage-provisioner=true in profile "auto-079027"
	I1216 03:32:07.622285   43066 addons.go:239] Setting addon storage-provisioner=true in "auto-079027"
	I1216 03:32:07.622293   43066 config.go:182] Loaded profile config "auto-079027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:32:07.622318   43066 addons.go:70] Setting default-storageclass=true in profile "auto-079027"
	I1216 03:32:07.622349   43066 host.go:66] Checking if "auto-079027" exists ...
	I1216 03:32:07.622353   43066 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-079027"
	I1216 03:32:07.623374   43066 out.go:179] * Verifying Kubernetes components...
	I1216 03:32:07.624712   43066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:32:07.624769   43066 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 03:32:07.625955   43066 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:32:07.625972   43066 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 03:32:07.626093   43066 addons.go:239] Setting addon default-storageclass=true in "auto-079027"
	I1216 03:32:07.626129   43066 host.go:66] Checking if "auto-079027" exists ...
	I1216 03:32:07.627939   43066 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 03:32:07.627958   43066 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 03:32:07.629306   43066 main.go:143] libmachine: domain auto-079027 has defined MAC address 52:54:00:0b:f1:e9 in network mk-auto-079027
	I1216 03:32:07.629760   43066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:f1:e9", ip: ""} in network mk-auto-079027: {Iface:virbr2 ExpiryTime:2025-12-16 04:31:41 +0000 UTC Type:0 Mac:52:54:00:0b:f1:e9 Iaid: IPaddr:192.168.50.67 Prefix:24 Hostname:auto-079027 Clientid:01:52:54:00:0b:f1:e9}
	I1216 03:32:07.629794   43066 main.go:143] libmachine: domain auto-079027 has defined IP address 192.168.50.67 and MAC address 52:54:00:0b:f1:e9 in network mk-auto-079027
	I1216 03:32:07.629991   43066 sshutil.go:53] new ssh client: &{IP:192.168.50.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/auto-079027/id_rsa Username:docker}
	I1216 03:32:07.630672   43066 main.go:143] libmachine: domain auto-079027 has defined MAC address 52:54:00:0b:f1:e9 in network mk-auto-079027
	I1216 03:32:07.631112   43066 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0b:f1:e9", ip: ""} in network mk-auto-079027: {Iface:virbr2 ExpiryTime:2025-12-16 04:31:41 +0000 UTC Type:0 Mac:52:54:00:0b:f1:e9 Iaid: IPaddr:192.168.50.67 Prefix:24 Hostname:auto-079027 Clientid:01:52:54:00:0b:f1:e9}
	I1216 03:32:07.631134   43066 main.go:143] libmachine: domain auto-079027 has defined IP address 192.168.50.67 and MAC address 52:54:00:0b:f1:e9 in network mk-auto-079027
	I1216 03:32:07.631317   43066 sshutil.go:53] new ssh client: &{IP:192.168.50.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/auto-079027/id_rsa Username:docker}
	I1216 03:32:07.833455   43066 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 03:32:07.944624   43066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:32:08.026027   43066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 03:32:08.237891   43066 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 03:32:08.526725   43066 start.go:977] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
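
The sed pipeline run at 03:32:07.833455 performs this injection: it adds a log directive ahead of the errors line and inserts the following hosts block ahead of the forward plugin in the CoreDNS Corefile, which is what makes host.minikube.internal resolve to the host-side address 192.168.50.1 inside the cluster (block taken verbatim from the command above):

        hosts {
           192.168.50.1 host.minikube.internal
           fallthrough
        }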
	I1216 03:32:08.528052   43066 node_ready.go:35] waiting up to 15m0s for node "auto-079027" to be "Ready" ...
	I1216 03:32:08.553608   43066 node_ready.go:49] node "auto-079027" is "Ready"
	I1216 03:32:08.553643   43066 node_ready.go:38] duration metric: took 25.542756ms for node "auto-079027" to be "Ready" ...
	I1216 03:32:08.553659   43066 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:32:08.553720   43066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:32:08.931894   43066 api_server.go:72] duration metric: took 1.309778359s to wait for apiserver process to appear ...
	I1216 03:32:08.931939   43066 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:32:08.931957   43066 api_server.go:253] Checking apiserver healthz at https://192.168.50.67:8443/healthz ...
	I1216 03:32:08.948970   43066 api_server.go:279] https://192.168.50.67:8443/healthz returned 200:
	ok
	I1216 03:32:08.951488   43066 api_server.go:141] control plane version: v1.34.2
	I1216 03:32:08.951521   43066 api_server.go:131] duration metric: took 19.572937ms to wait for apiserver health ...
	I1216 03:32:08.951534   43066 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:32:08.974234   43066 system_pods.go:59] 8 kube-system pods found
	I1216 03:32:08.974278   43066 system_pods.go:61] "coredns-66bc5c9577-gqnr7" [bc517995-7ee3-4135-92b3-d5b4465a51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:08.974293   43066 system_pods.go:61] "coredns-66bc5c9577-tf8wg" [4a564d18-d8e4-4a87-aad0-6ce2e6d936e2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:08.974305   43066 system_pods.go:61] "etcd-auto-079027" [823d98fd-8545-4d70-9d38-5d96c9a2c02d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:32:08.974314   43066 system_pods.go:61] "kube-apiserver-auto-079027" [09971dd6-36ae-4e94-89f5-69e70ab7bb20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:32:08.974320   43066 system_pods.go:61] "kube-controller-manager-auto-079027" [d8e22d56-0a02-448f-a82d-df8d8b1c143c] Running
	I1216 03:32:08.974328   43066 system_pods.go:61] "kube-proxy-z27dv" [67e5f5e9-bf74-45fe-b739-c9a0d1a645f2] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 03:32:08.974335   43066 system_pods.go:61] "kube-scheduler-auto-079027" [4721e71a-d439-4362-bab7-0a00dae17433] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:32:08.974354   43066 system_pods.go:61] "storage-provisioner" [dee19316-4e78-49d3-934f-0235f1cac50d] Pending
	I1216 03:32:08.974367   43066 system_pods.go:74] duration metric: took 22.825585ms to wait for pod list to return data ...
	I1216 03:32:08.974374   43066 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:32:08.985944   43066 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1216 03:32:07.055478   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:07.056241   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:07.056259   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:07.056617   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:07.056647   43455 retry.go:31] will retry after 910.76854ms: waiting for domain to come up
	I1216 03:32:07.969280   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:07.970004   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:07.970023   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:07.970460   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:07.970497   43455 retry.go:31] will retry after 1.364536663s: waiting for domain to come up
	I1216 03:32:09.336440   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:09.337287   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:09.337309   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:09.337740   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:09.337783   43455 retry.go:31] will retry after 1.638483619s: waiting for domain to come up
	I1216 03:32:10.977318   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:10.978137   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:10.978155   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:10.978635   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:10.978670   43455 retry.go:31] will retry after 1.809483931s: waiting for domain to come up
	I1216 03:32:08.986281   43066 default_sa.go:45] found service account: "default"
	I1216 03:32:08.986304   43066 default_sa.go:55] duration metric: took 11.922733ms for default service account to be created ...
	I1216 03:32:08.986319   43066 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 03:32:08.987122   43066 addons.go:530] duration metric: took 1.36495172s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1216 03:32:08.993395   43066 system_pods.go:86] 8 kube-system pods found
	I1216 03:32:08.993428   43066 system_pods.go:89] "coredns-66bc5c9577-gqnr7" [bc517995-7ee3-4135-92b3-d5b4465a51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:08.993439   43066 system_pods.go:89] "coredns-66bc5c9577-tf8wg" [4a564d18-d8e4-4a87-aad0-6ce2e6d936e2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:08.993449   43066 system_pods.go:89] "etcd-auto-079027" [823d98fd-8545-4d70-9d38-5d96c9a2c02d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:32:08.993462   43066 system_pods.go:89] "kube-apiserver-auto-079027" [09971dd6-36ae-4e94-89f5-69e70ab7bb20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:32:08.993473   43066 system_pods.go:89] "kube-controller-manager-auto-079027" [d8e22d56-0a02-448f-a82d-df8d8b1c143c] Running
	I1216 03:32:08.993485   43066 system_pods.go:89] "kube-proxy-z27dv" [67e5f5e9-bf74-45fe-b739-c9a0d1a645f2] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 03:32:08.993497   43066 system_pods.go:89] "kube-scheduler-auto-079027" [4721e71a-d439-4362-bab7-0a00dae17433] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:32:08.993534   43066 system_pods.go:89] "storage-provisioner" [dee19316-4e78-49d3-934f-0235f1cac50d] Pending
	I1216 03:32:08.993579   43066 retry.go:31] will retry after 265.607367ms: missing components: kube-dns, kube-proxy
	I1216 03:32:09.032323   43066 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-079027" context rescaled to 1 replicas
	I1216 03:32:09.268648   43066 system_pods.go:86] 8 kube-system pods found
	I1216 03:32:09.268691   43066 system_pods.go:89] "coredns-66bc5c9577-gqnr7" [bc517995-7ee3-4135-92b3-d5b4465a51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:09.268702   43066 system_pods.go:89] "coredns-66bc5c9577-tf8wg" [4a564d18-d8e4-4a87-aad0-6ce2e6d936e2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:09.268712   43066 system_pods.go:89] "etcd-auto-079027" [823d98fd-8545-4d70-9d38-5d96c9a2c02d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:32:09.268721   43066 system_pods.go:89] "kube-apiserver-auto-079027" [09971dd6-36ae-4e94-89f5-69e70ab7bb20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:32:09.268729   43066 system_pods.go:89] "kube-controller-manager-auto-079027" [d8e22d56-0a02-448f-a82d-df8d8b1c143c] Running
	I1216 03:32:09.268746   43066 system_pods.go:89] "kube-proxy-z27dv" [67e5f5e9-bf74-45fe-b739-c9a0d1a645f2] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 03:32:09.268758   43066 system_pods.go:89] "kube-scheduler-auto-079027" [4721e71a-d439-4362-bab7-0a00dae17433] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:32:09.268768   43066 system_pods.go:89] "storage-provisioner" [dee19316-4e78-49d3-934f-0235f1cac50d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:32:09.268794   43066 retry.go:31] will retry after 280.2749ms: missing components: kube-dns, kube-proxy
	I1216 03:32:09.556471   43066 system_pods.go:86] 8 kube-system pods found
	I1216 03:32:09.556515   43066 system_pods.go:89] "coredns-66bc5c9577-gqnr7" [bc517995-7ee3-4135-92b3-d5b4465a51fe] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:09.556526   43066 system_pods.go:89] "coredns-66bc5c9577-tf8wg" [4a564d18-d8e4-4a87-aad0-6ce2e6d936e2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:09.556535   43066 system_pods.go:89] "etcd-auto-079027" [823d98fd-8545-4d70-9d38-5d96c9a2c02d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:32:09.556544   43066 system_pods.go:89] "kube-apiserver-auto-079027" [09971dd6-36ae-4e94-89f5-69e70ab7bb20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:32:09.556555   43066 system_pods.go:89] "kube-controller-manager-auto-079027" [d8e22d56-0a02-448f-a82d-df8d8b1c143c] Running
	I1216 03:32:09.556563   43066 system_pods.go:89] "kube-proxy-z27dv" [67e5f5e9-bf74-45fe-b739-c9a0d1a645f2] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1216 03:32:09.556574   43066 system_pods.go:89] "kube-scheduler-auto-079027" [4721e71a-d439-4362-bab7-0a00dae17433] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:32:09.556582   43066 system_pods.go:89] "storage-provisioner" [dee19316-4e78-49d3-934f-0235f1cac50d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1216 03:32:09.556604   43066 retry.go:31] will retry after 450.685399ms: missing components: kube-dns, kube-proxy
	I1216 03:32:10.013349   43066 system_pods.go:86] 8 kube-system pods found
	I1216 03:32:10.013382   43066 system_pods.go:89] "coredns-66bc5c9577-gqnr7" [bc517995-7ee3-4135-92b3-d5b4465a51fe] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:10.013394   43066 system_pods.go:89] "coredns-66bc5c9577-tf8wg" [4a564d18-d8e4-4a87-aad0-6ce2e6d936e2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:10.013404   43066 system_pods.go:89] "etcd-auto-079027" [823d98fd-8545-4d70-9d38-5d96c9a2c02d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:32:10.013412   43066 system_pods.go:89] "kube-apiserver-auto-079027" [09971dd6-36ae-4e94-89f5-69e70ab7bb20] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:32:10.013418   43066 system_pods.go:89] "kube-controller-manager-auto-079027" [d8e22d56-0a02-448f-a82d-df8d8b1c143c] Running
	I1216 03:32:10.013425   43066 system_pods.go:89] "kube-proxy-z27dv" [67e5f5e9-bf74-45fe-b739-c9a0d1a645f2] Running
	I1216 03:32:10.013432   43066 system_pods.go:89] "kube-scheduler-auto-079027" [4721e71a-d439-4362-bab7-0a00dae17433] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:32:10.013437   43066 system_pods.go:89] "storage-provisioner" [dee19316-4e78-49d3-934f-0235f1cac50d] Running
	I1216 03:32:10.013458   43066 system_pods.go:126] duration metric: took 1.02712819s to wait for k8s-apps to be running ...
	I1216 03:32:10.013471   43066 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 03:32:10.013523   43066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:32:10.036461   43066 system_svc.go:56] duration metric: took 22.982493ms WaitForService to wait for kubelet
	I1216 03:32:10.036488   43066 kubeadm.go:587] duration metric: took 2.414375877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:32:10.036510   43066 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:32:10.040949   43066 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 03:32:10.040970   43066 node_conditions.go:123] node cpu capacity is 2
	I1216 03:32:10.040985   43066 node_conditions.go:105] duration metric: took 4.468358ms to run NodePressure ...
	I1216 03:32:10.040997   43066 start.go:242] waiting for startup goroutines ...
	I1216 03:32:10.041007   43066 start.go:247] waiting for cluster config update ...
	I1216 03:32:10.041020   43066 start.go:256] writing updated cluster config ...
	I1216 03:32:10.041279   43066 ssh_runner.go:195] Run: rm -f paused
	I1216 03:32:10.046324   43066 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:32:10.050365   43066 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gqnr7" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:12.790481   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:12.791225   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:12.791242   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:12.791559   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:12.791595   43455 retry.go:31] will retry after 2.685854796s: waiting for domain to come up
	I1216 03:32:15.479865   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:15.480463   43455 main.go:143] libmachine: no network interface addresses found for domain kindnet-079027 (source=lease)
	I1216 03:32:15.480482   43455 main.go:143] libmachine: trying to list again with source=arp
	I1216 03:32:15.480832   43455 main.go:143] libmachine: unable to find current IP address of domain kindnet-079027 in network mk-kindnet-079027 (interfaces detected: [])
	I1216 03:32:15.480867   43455 retry.go:31] will retry after 3.260389682s: waiting for domain to come up
	W1216 03:32:12.058163   43066 pod_ready.go:104] pod "coredns-66bc5c9577-gqnr7" is not "Ready", error: <nil>
	W1216 03:32:14.557817   43066 pod_ready.go:104] pod "coredns-66bc5c9577-gqnr7" is not "Ready", error: <nil>
	I1216 03:32:15.842752   43267 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 c73fce5742a5738a73591a4789422a3ac9a9ac34d107f86ed41a13551bad41a7 7886b3cb93db3490c21067b3f08a44be6ead41881cd5cb2681c295485c892b48 1603771fc3bb957264f0e0f69af13f99d97b23a395e946185f6f8606e58076e3 1e1943eab5540b2524451fd9149e5bdb48d529525992e4427f98907806a8120c 6af7add7fe9690f182b65bb9686b7a53b05509c0da8b8a4bc1586130f494913d b8b867c1bdad08e7d4b11632415fa3ce077d2fbbc7d269b68165673c2c8ac4e0 516464b244c242de261bac3b8cbd3e0bbc298412c90d2700a0cf9253021faa00 a8bc982e97375733c6a6884402ec35e3c9d903a482fa1c0cec72a4d3d95e8461 2e96f0cb1410c8109bf609900229a88bc8162f92f8318a2e7cbf083b31cd0050 5625f27f367a7d7555860919ccfc373315df2bc1a1c3689aed6a359f22d5b62d 25cafb4681eab4cf7f0278530b5be09e38e3155ff5120fbadabb938d0b14882e 4a2bb8ba97dd0b3e5c3aa3b73fbbffd8d773e5fdd2227b6986d6e3c38cea3f16: (20.285511125s)
	W1216 03:32:15.842831   43267 kubeadm.go:649] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 c73fce5742a5738a73591a4789422a3ac9a9ac34d107f86ed41a13551bad41a7 7886b3cb93db3490c21067b3f08a44be6ead41881cd5cb2681c295485c892b48 1603771fc3bb957264f0e0f69af13f99d97b23a395e946185f6f8606e58076e3 1e1943eab5540b2524451fd9149e5bdb48d529525992e4427f98907806a8120c 6af7add7fe9690f182b65bb9686b7a53b05509c0da8b8a4bc1586130f494913d b8b867c1bdad08e7d4b11632415fa3ce077d2fbbc7d269b68165673c2c8ac4e0 516464b244c242de261bac3b8cbd3e0bbc298412c90d2700a0cf9253021faa00 a8bc982e97375733c6a6884402ec35e3c9d903a482fa1c0cec72a4d3d95e8461 2e96f0cb1410c8109bf609900229a88bc8162f92f8318a2e7cbf083b31cd0050 5625f27f367a7d7555860919ccfc373315df2bc1a1c3689aed6a359f22d5b62d 25cafb4681eab4cf7f0278530b5be09e38e3155ff5120fbadabb938d0b14882e 4a2bb8ba97dd0b3e5c3aa3b73fbbffd8d773e5fdd2227b6986d6e3c38cea3f16: Process exited with status 1
	stdout:
	c73fce5742a5738a73591a4789422a3ac9a9ac34d107f86ed41a13551bad41a7
	7886b3cb93db3490c21067b3f08a44be6ead41881cd5cb2681c295485c892b48
	1603771fc3bb957264f0e0f69af13f99d97b23a395e946185f6f8606e58076e3
	1e1943eab5540b2524451fd9149e5bdb48d529525992e4427f98907806a8120c
	6af7add7fe9690f182b65bb9686b7a53b05509c0da8b8a4bc1586130f494913d
	b8b867c1bdad08e7d4b11632415fa3ce077d2fbbc7d269b68165673c2c8ac4e0
	
	stderr:
	E1216 03:32:15.836790    3648 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"516464b244c242de261bac3b8cbd3e0bbc298412c90d2700a0cf9253021faa00\": container with ID starting with 516464b244c242de261bac3b8cbd3e0bbc298412c90d2700a0cf9253021faa00 not found: ID does not exist" containerID="516464b244c242de261bac3b8cbd3e0bbc298412c90d2700a0cf9253021faa00"
	time="2025-12-16T03:32:15Z" level=fatal msg="stopping the container \"516464b244c242de261bac3b8cbd3e0bbc298412c90d2700a0cf9253021faa00\": rpc error: code = NotFound desc = could not find container \"516464b244c242de261bac3b8cbd3e0bbc298412c90d2700a0cf9253021faa00\": container with ID starting with 516464b244c242de261bac3b8cbd3e0bbc298412c90d2700a0cf9253021faa00 not found: ID does not exist"
	I1216 03:32:15.842898   43267 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1216 03:32:15.873520   43267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:32:15.885146   43267 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec 16 03:30 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5641 Dec 16 03:30 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1953 Dec 16 03:31 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5585 Dec 16 03:30 /etc/kubernetes/scheduler.conf
	
	I1216 03:32:15.885212   43267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:32:15.896375   43267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:32:15.906340   43267 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 03:32:15.906402   43267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:32:15.920974   43267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:32:15.932246   43267 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 03:32:15.932299   43267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:32:15.943220   43267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:32:15.954079   43267 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1216 03:32:15.954124   43267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:32:15.965404   43267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:32:15.976500   43267 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:32:16.029236   43267 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:32:17.547254   43267 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.517980774s)
	I1216 03:32:17.547370   43267 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:32:17.809942   43267 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:32:17.863496   43267 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:32:17.960424   43267 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:32:17.960513   43267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:32:18.460632   43267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:32:18.961506   43267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:32:18.998030   43267 api_server.go:72] duration metric: took 1.037618832s to wait for apiserver process to appear ...
	I1216 03:32:18.998055   43267 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:32:18.998077   43267 api_server.go:253] Checking apiserver healthz at https://192.168.83.23:8443/healthz ...
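The healthz wait that begins here is, in essence, a retry loop against the apiserver's /healthz endpoint over HTTPS, tolerating the 403 and 500 responses that appear further down until a 200 arrives. A minimal standalone sketch of that loop (not minikube's implementation): the URL is taken from the log, while the 4-minute budget, 500 ms interval, and skipped certificate verification are assumptions.

// healthzwait.go - standalone sketch of the "waiting for apiserver healthz status" step.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The bootstrapping apiserver serves a self-signed certificate, so this
	// probe skips verification, much like `curl -k` would.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok" - the apiserver is healthy
			}
			// 403 (anonymous user) and 500 (post-start hooks pending), as seen
			// in the log, simply mean "not ready yet"; keep polling.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.83.23:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}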
	I1216 03:32:18.742617   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:18.743573   43455 main.go:143] libmachine: domain kindnet-079027 has current primary IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:18.743603   43455 main.go:143] libmachine: found domain IP: 192.168.72.85
	I1216 03:32:18.743614   43455 main.go:143] libmachine: reserving static IP address...
	I1216 03:32:18.744127   43455 main.go:143] libmachine: unable to find host DHCP lease matching {name: "kindnet-079027", mac: "52:54:00:0f:e2:b0", ip: "192.168.72.85"} in network mk-kindnet-079027
	I1216 03:32:18.975520   43455 main.go:143] libmachine: reserved static IP address 192.168.72.85 for domain kindnet-079027
	I1216 03:32:18.975548   43455 main.go:143] libmachine: waiting for SSH...
	I1216 03:32:18.975557   43455 main.go:143] libmachine: Getting to WaitForSSH function...
	I1216 03:32:18.979205   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:18.979711   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:18.979748   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:18.980128   43455 main.go:143] libmachine: Using SSH client type: native
	I1216 03:32:18.980461   43455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.85 22 <nil> <nil>}
	I1216 03:32:18.980476   43455 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1216 03:32:19.107825   43455 main.go:143] libmachine: SSH cmd err, output: <nil>: 
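The "waiting for SSH" step above boils down to retrying a connection to the new VM until sshd answers and a trivial command (`exit 0`) succeeds. A simplified sketch that only checks TCP reachability of port 22 (it does not authenticate or run the command): the IP comes from the log; the retry interval and overall budget are assumptions.

// sshwait.go - sketch of the WaitForSSH step as a port-reachability poll.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close() // sshd accepted the connection; the host is up
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("%s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForPort("192.168.72.85:22", 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}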
	I1216 03:32:19.108200   43455 main.go:143] libmachine: domain creation complete
	I1216 03:32:19.110070   43455 machine.go:94] provisionDockerMachine start ...
	I1216 03:32:19.112758   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.113296   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:19.113327   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.113519   43455 main.go:143] libmachine: Using SSH client type: native
	I1216 03:32:19.113839   43455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.85 22 <nil> <nil>}
	I1216 03:32:19.113855   43455 main.go:143] libmachine: About to run SSH command:
	hostname
	I1216 03:32:19.230473   43455 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1216 03:32:19.230504   43455 buildroot.go:166] provisioning hostname "kindnet-079027"
	I1216 03:32:19.233886   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.234368   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:19.234391   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.234570   43455 main.go:143] libmachine: Using SSH client type: native
	I1216 03:32:19.234814   43455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.85 22 <nil> <nil>}
	I1216 03:32:19.234835   43455 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-079027 && echo "kindnet-079027" | sudo tee /etc/hostname
	I1216 03:32:19.367556   43455 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-079027
	
	I1216 03:32:19.370722   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.371412   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:19.371446   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.371642   43455 main.go:143] libmachine: Using SSH client type: native
	I1216 03:32:19.371940   43455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.85 22 <nil> <nil>}
	I1216 03:32:19.371967   43455 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-079027' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-079027/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-079027' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 03:32:19.496996   43455 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1216 03:32:19.497027   43455 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22158-5036/.minikube CaCertPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22158-5036/.minikube}
	I1216 03:32:19.497077   43455 buildroot.go:174] setting up certificates
	I1216 03:32:19.497090   43455 provision.go:84] configureAuth start
	I1216 03:32:19.500614   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.501180   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:19.501219   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.504096   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.504566   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:19.504593   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.504751   43455 provision.go:143] copyHostCerts
	I1216 03:32:19.504828   43455 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5036/.minikube/ca.pem, removing ...
	I1216 03:32:19.504853   43455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5036/.minikube/ca.pem
	I1216 03:32:19.504940   43455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22158-5036/.minikube/ca.pem (1078 bytes)
	I1216 03:32:19.505075   43455 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5036/.minikube/cert.pem, removing ...
	I1216 03:32:19.505082   43455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5036/.minikube/cert.pem
	I1216 03:32:19.505123   43455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22158-5036/.minikube/cert.pem (1123 bytes)
	I1216 03:32:19.505193   43455 exec_runner.go:144] found /home/jenkins/minikube-integration/22158-5036/.minikube/key.pem, removing ...
	I1216 03:32:19.505199   43455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22158-5036/.minikube/key.pem
	I1216 03:32:19.505230   43455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22158-5036/.minikube/key.pem (1679 bytes)
	I1216 03:32:19.505335   43455 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22158-5036/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca-key.pem org=jenkins.kindnet-079027 san=[127.0.0.1 192.168.72.85 kindnet-079027 localhost minikube]
	I1216 03:32:19.604575   43455 provision.go:177] copyRemoteCerts
	I1216 03:32:19.604648   43455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 03:32:19.607914   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.608410   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:19.608448   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.608622   43455 sshutil.go:53] new ssh client: &{IP:192.168.72.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/id_rsa Username:docker}
	I1216 03:32:19.699107   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 03:32:19.727701   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1216 03:32:19.755673   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 03:32:19.787747   43455 provision.go:87] duration metric: took 290.636286ms to configureAuth
	I1216 03:32:19.787778   43455 buildroot.go:189] setting minikube options for container-runtime
	I1216 03:32:19.788022   43455 config.go:182] Loaded profile config "kindnet-079027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:32:19.791641   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.792132   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:19.792169   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:19.792361   43455 main.go:143] libmachine: Using SSH client type: native
	I1216 03:32:19.792650   43455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.85 22 <nil> <nil>}
	I1216 03:32:19.792677   43455 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 03:32:20.090463   43455 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 03:32:20.090490   43455 machine.go:97] duration metric: took 980.39954ms to provisionDockerMachine
	I1216 03:32:20.090500   43455 client.go:176] duration metric: took 18.683858332s to LocalClient.Create
	I1216 03:32:20.090518   43455 start.go:167] duration metric: took 18.683913531s to libmachine.API.Create "kindnet-079027"
	I1216 03:32:20.090526   43455 start.go:293] postStartSetup for "kindnet-079027" (driver="kvm2")
	I1216 03:32:20.090537   43455 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 03:32:20.090605   43455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 03:32:20.094103   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.094620   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:20.094653   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.094826   43455 sshutil.go:53] new ssh client: &{IP:192.168.72.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/id_rsa Username:docker}
	I1216 03:32:20.187787   43455 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 03:32:20.193116   43455 info.go:137] Remote host: Buildroot 2025.02
	I1216 03:32:20.193155   43455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5036/.minikube/addons for local assets ...
	I1216 03:32:20.193240   43455 filesync.go:126] Scanning /home/jenkins/minikube-integration/22158-5036/.minikube/files for local assets ...
	I1216 03:32:20.193317   43455 filesync.go:149] local asset: /home/jenkins/minikube-integration/22158-5036/.minikube/files/etc/ssl/certs/89742.pem -> 89742.pem in /etc/ssl/certs
	I1216 03:32:20.193414   43455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1216 03:32:20.205740   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/files/etc/ssl/certs/89742.pem --> /etc/ssl/certs/89742.pem (1708 bytes)
	I1216 03:32:20.242317   43455 start.go:296] duration metric: took 151.777203ms for postStartSetup
	I1216 03:32:20.245949   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.246397   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:20.246427   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.246658   43455 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/config.json ...
	I1216 03:32:20.246917   43455 start.go:128] duration metric: took 18.84190913s to createHost
	I1216 03:32:20.249571   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.250014   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:20.250044   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.250272   43455 main.go:143] libmachine: Using SSH client type: native
	I1216 03:32:20.250506   43455 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 192.168.72.85 22 <nil> <nil>}
	I1216 03:32:20.250519   43455 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1216 03:32:20.363378   43455 main.go:143] libmachine: SSH cmd err, output: <nil>: 1765855940.319446779
	
	I1216 03:32:20.363406   43455 fix.go:216] guest clock: 1765855940.319446779
	I1216 03:32:20.363417   43455 fix.go:229] Guest: 2025-12-16 03:32:20.319446779 +0000 UTC Remote: 2025-12-16 03:32:20.246959246 +0000 UTC m=+18.971854705 (delta=72.487533ms)
	I1216 03:32:20.363438   43455 fix.go:200] guest clock delta is within tolerance: 72.487533ms
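The guest-clock check above runs `date +%s.%N` on the VM and compares the result with the host clock, accepting the machine only if the skew is small (72 ms here). A sketch of that comparison, assuming a 2-second tolerance (the actual threshold is not shown in the log):

// clockskew.go - sketch of the guest clock delta check.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses `date +%s.%N` output from the guest and returns host-guest skew.
func clockDelta(guestDate string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestDate), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Normalize the fractional part to exactly 9 digits of nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Since(time.Unix(sec, nsec)), nil
}

func main() {
	delta, err := clockDelta("1765855940.319446779") // value captured in the log above
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed threshold
	if math.Abs(float64(delta)) > float64(tolerance) {
		fmt.Printf("guest clock skew %s exceeds tolerance %s\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock skew %s is within tolerance\n", delta)
	}
}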
	I1216 03:32:20.363445   43455 start.go:83] releasing machines lock for "kindnet-079027", held for 18.958567245s
	I1216 03:32:20.367215   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.367673   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:20.367721   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.368284   43455 ssh_runner.go:195] Run: cat /version.json
	I1216 03:32:20.368427   43455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 03:32:20.371233   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.371451   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.371681   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:20.371711   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.371872   43455 sshutil.go:53] new ssh client: &{IP:192.168.72.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/id_rsa Username:docker}
	I1216 03:32:20.371890   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:20.371914   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:20.372140   43455 sshutil.go:53] new ssh client: &{IP:192.168.72.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/kindnet-079027/id_rsa Username:docker}
	I1216 03:32:20.458434   43455 ssh_runner.go:195] Run: systemctl --version
	I1216 03:32:20.496263   43455 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 03:32:20.659125   43455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1216 03:32:20.667988   43455 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1216 03:32:20.668091   43455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 03:32:20.693406   43455 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1216 03:32:20.693430   43455 start.go:496] detecting cgroup driver to use...
	I1216 03:32:20.693490   43455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 03:32:20.718895   43455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 03:32:20.740447   43455 docker.go:218] disabling cri-docker service (if available) ...
	I1216 03:32:20.740523   43455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 03:32:20.759340   43455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 03:32:20.775521   43455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 03:32:20.923067   43455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 03:32:21.139033   43455 docker.go:234] disabling docker service ...
	I1216 03:32:21.139090   43455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 03:32:21.159159   43455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 03:32:21.174409   43455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	W1216 03:32:17.055695   43066 pod_ready.go:104] pod "coredns-66bc5c9577-gqnr7" is not "Ready", error: <nil>
	W1216 03:32:19.058037   43066 pod_ready.go:104] pod "coredns-66bc5c9577-gqnr7" is not "Ready", error: <nil>
	I1216 03:32:20.053865   43066 pod_ready.go:99] pod "coredns-66bc5c9577-gqnr7" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-gqnr7" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-gqnr7" not found
	I1216 03:32:20.053898   43066 pod_ready.go:86] duration metric: took 10.003512376s for pod "coredns-66bc5c9577-gqnr7" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:20.053912   43066 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tf8wg" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:21.339533   43455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 03:32:21.497663   43455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 03:32:21.517067   43455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 03:32:21.540339   43455 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1216 03:32:21.540404   43455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:32:21.553626   43455 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 03:32:21.553686   43455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:32:21.566876   43455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:32:21.584663   43455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:32:21.598076   43455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 03:32:21.612749   43455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:32:21.624974   43455 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:32:21.647637   43455 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 03:32:21.665040   43455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 03:32:21.677511   43455 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1216 03:32:21.677573   43455 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1216 03:32:21.698377   43455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 03:32:21.709296   43455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:32:21.892311   43455 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 03:32:22.005148   43455 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 03:32:22.005212   43455 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 03:32:22.010549   43455 start.go:564] Will wait 60s for crictl version
	I1216 03:32:22.010647   43455 ssh_runner.go:195] Run: which crictl
	I1216 03:32:22.014801   43455 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 03:32:22.062052   43455 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1216 03:32:22.062126   43455 ssh_runner.go:195] Run: crio --version
	I1216 03:32:22.099763   43455 ssh_runner.go:195] Run: crio --version
	I1216 03:32:22.135429   43455 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.29.1 ...
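After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock before probing it with `crictl version`. A sketch of that socket wait (the path and 60s budget follow the log; the 500 ms poll interval is an assumption, and this does not invoke crictl itself):

// sockwait.go - sketch of the "Will wait 60s for socket path" step.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file exists; the runtime can now be probed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForFile("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket present; `crictl version` can be run next")
}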
	I1216 03:32:21.356991   43267 api_server.go:279] https://192.168.83.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 03:32:21.357020   43267 api_server.go:103] status: https://192.168.83.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 03:32:21.357037   43267 api_server.go:253] Checking apiserver healthz at https://192.168.83.23:8443/healthz ...
	I1216 03:32:21.429878   43267 api_server.go:279] https://192.168.83.23:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1216 03:32:21.429909   43267 api_server.go:103] status: https://192.168.83.23:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1216 03:32:21.499130   43267 api_server.go:253] Checking apiserver healthz at https://192.168.83.23:8443/healthz ...
	I1216 03:32:21.510682   43267 api_server.go:279] https://192.168.83.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:32:21.510723   43267 api_server.go:103] status: https://192.168.83.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 03:32:21.998207   43267 api_server.go:253] Checking apiserver healthz at https://192.168.83.23:8443/healthz ...
	I1216 03:32:22.006379   43267 api_server.go:279] https://192.168.83.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:32:22.006413   43267 api_server.go:103] status: https://192.168.83.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 03:32:22.499072   43267 api_server.go:253] Checking apiserver healthz at https://192.168.83.23:8443/healthz ...
	I1216 03:32:22.524538   43267 api_server.go:279] https://192.168.83.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:32:22.524596   43267 api_server.go:103] status: https://192.168.83.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 03:32:22.999070   43267 api_server.go:253] Checking apiserver healthz at https://192.168.83.23:8443/healthz ...
	I1216 03:32:23.008079   43267 api_server.go:279] https://192.168.83.23:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1216 03:32:23.008108   43267 api_server.go:103] status: https://192.168.83.23:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1216 03:32:23.498369   43267 api_server.go:253] Checking apiserver healthz at https://192.168.83.23:8443/healthz ...
	I1216 03:32:23.503271   43267 api_server.go:279] https://192.168.83.23:8443/healthz returned 200:
	ok
	I1216 03:32:23.516104   43267 api_server.go:141] control plane version: v1.34.2
	I1216 03:32:23.516126   43267 api_server.go:131] duration metric: took 4.518063625s to wait for apiserver health ...
	I1216 03:32:23.516177   43267 cni.go:84] Creating CNI manager for ""
	I1216 03:32:23.516185   43267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 03:32:23.518245   43267 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1216 03:32:23.523078   43267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1216 03:32:23.548919   43267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1216 03:32:23.587680   43267 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:32:23.597672   43267 system_pods.go:59] 6 kube-system pods found
	I1216 03:32:23.597725   43267 system_pods.go:61] "coredns-66bc5c9577-rcwxg" [b4c343db-7dab-4de5-89f2-ce2687b6631f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:23.597738   43267 system_pods.go:61] "etcd-pause-127368" [e387d448-2b77-40c5-a65b-335fb7902fd5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:32:23.597760   43267 system_pods.go:61] "kube-apiserver-pause-127368" [50ced11f-9adb-413d-a3f9-02a8e4e1e331] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:32:23.597775   43267 system_pods.go:61] "kube-controller-manager-pause-127368" [879a4960-022e-4682-87bd-30d9240d52ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 03:32:23.597785   43267 system_pods.go:61] "kube-proxy-6tst4" [c5bc773a-8ef2-4f79-bdd3-ead643257601] Running
	I1216 03:32:23.597797   43267 system_pods.go:61] "kube-scheduler-pause-127368" [a9aa83b7-fd6e-4ff0-b3d3-d8d9b8111355] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:32:23.597807   43267 system_pods.go:74] duration metric: took 10.106234ms to wait for pod list to return data ...
	I1216 03:32:23.597826   43267 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:32:23.606449   43267 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 03:32:23.606483   43267 node_conditions.go:123] node cpu capacity is 2
	I1216 03:32:23.606499   43267 node_conditions.go:105] duration metric: took 8.666246ms to run NodePressure ...
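The NodePressure check above reads each node's reported capacity (ephemeral storage 17734596Ki, 2 CPUs). The same information can be pulled from the standard node object; a sketch using kubectl rather than minikube's internal client, with kubeconfig/context selection left to the environment (an assumption):

// nodecapacity.go - sketch of reading node capacity, as in the NodePressure check.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Capacity map[string]string `json:"capacity"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var nodes nodeList
	if err := json.Unmarshal(out, &nodes); err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Metadata.Name, n.Status.Capacity["cpu"], n.Status.Capacity["ephemeral-storage"])
	}
}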
	I1216 03:32:23.606559   43267 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1216 03:32:23.887586   43267 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1216 03:32:23.892341   43267 kubeadm.go:744] kubelet initialised
	I1216 03:32:23.892367   43267 kubeadm.go:745] duration metric: took 4.755865ms waiting for restarted kubelet to initialise ...
	I1216 03:32:23.892386   43267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 03:32:23.916831   43267 ops.go:34] apiserver oom_adj: -16
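The oom_adj check above finds the kube-apiserver PID with pgrep and reads /proc/<pid>/oom_adj, expecting a strongly negative value (-16 here) so the kernel avoids OOM-killing it. A sketch of that check; the pgrep flags and pattern come from the log (which ran the read via sudo), and error handling is kept minimal.

// oomcheck.go - sketch of the apiserver oom_adj check.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// -x exact match, -n newest process, -f match against the full command line.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	raw, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("kube-apiserver (pid %s) oom_adj: %s", pid, raw)
}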
	I1216 03:32:23.916857   43267 kubeadm.go:602] duration metric: took 28.448824281s to restartPrimaryControlPlane
	I1216 03:32:23.916877   43267 kubeadm.go:403] duration metric: took 28.607885622s to StartCluster
	I1216 03:32:23.916897   43267 settings.go:142] acquiring lock: {Name:mk546ecdfe1860ae68a814905b53e6453298b4fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:23.917007   43267 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22158-5036/kubeconfig
	I1216 03:32:23.918534   43267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/kubeconfig: {Name:mk6832d71ef0ad581fa898dceefc2fcc2fd665b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:23.918796   43267 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.83.23 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 03:32:23.918948   43267 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1216 03:32:23.919093   43267 config.go:182] Loaded profile config "pause-127368": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:32:23.920505   43267 out.go:179] * Enabled addons: 
	I1216 03:32:23.920507   43267 out.go:179] * Verifying Kubernetes components...
	I1216 03:32:23.921643   43267 addons.go:530] duration metric: took 2.721553ms for enable addons: enabled=[]
	I1216 03:32:23.921674   43267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:32:24.153042   43267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:32:24.178150   43267 node_ready.go:35] waiting up to 6m0s for node "pause-127368" to be "Ready" ...
	I1216 03:32:24.182356   43267 node_ready.go:49] node "pause-127368" is "Ready"
	I1216 03:32:24.182381   43267 node_ready.go:38] duration metric: took 4.196221ms for node "pause-127368" to be "Ready" ...
	I1216 03:32:24.182396   43267 api_server.go:52] waiting for apiserver process to appear ...
	I1216 03:32:24.182451   43267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:32:24.205367   43267 api_server.go:72] duration metric: took 286.528322ms to wait for apiserver process to appear ...
	I1216 03:32:24.205402   43267 api_server.go:88] waiting for apiserver healthz status ...
	I1216 03:32:24.205427   43267 api_server.go:253] Checking apiserver healthz at https://192.168.83.23:8443/healthz ...
	I1216 03:32:24.212509   43267 api_server.go:279] https://192.168.83.23:8443/healthz returned 200:
	ok
	I1216 03:32:24.213670   43267 api_server.go:141] control plane version: v1.34.2
	I1216 03:32:24.213700   43267 api_server.go:131] duration metric: took 8.289431ms to wait for apiserver health ...
	I1216 03:32:24.213713   43267 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 03:32:24.218182   43267 system_pods.go:59] 6 kube-system pods found
	I1216 03:32:24.218212   43267 system_pods.go:61] "coredns-66bc5c9577-rcwxg" [b4c343db-7dab-4de5-89f2-ce2687b6631f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:24.218224   43267 system_pods.go:61] "etcd-pause-127368" [e387d448-2b77-40c5-a65b-335fb7902fd5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:32:24.218233   43267 system_pods.go:61] "kube-apiserver-pause-127368" [50ced11f-9adb-413d-a3f9-02a8e4e1e331] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:32:24.218242   43267 system_pods.go:61] "kube-controller-manager-pause-127368" [879a4960-022e-4682-87bd-30d9240d52ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 03:32:24.218250   43267 system_pods.go:61] "kube-proxy-6tst4" [c5bc773a-8ef2-4f79-bdd3-ead643257601] Running
	I1216 03:32:24.218257   43267 system_pods.go:61] "kube-scheduler-pause-127368" [a9aa83b7-fd6e-4ff0-b3d3-d8d9b8111355] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:32:24.218265   43267 system_pods.go:74] duration metric: took 4.54413ms to wait for pod list to return data ...
	I1216 03:32:24.218279   43267 default_sa.go:34] waiting for default service account to be created ...
	I1216 03:32:24.220906   43267 default_sa.go:45] found service account: "default"
	I1216 03:32:24.220947   43267 default_sa.go:55] duration metric: took 2.660134ms for default service account to be created ...
	I1216 03:32:24.220961   43267 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 03:32:24.223981   43267 system_pods.go:86] 6 kube-system pods found
	I1216 03:32:24.224005   43267 system_pods.go:89] "coredns-66bc5c9577-rcwxg" [b4c343db-7dab-4de5-89f2-ce2687b6631f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1216 03:32:24.224020   43267 system_pods.go:89] "etcd-pause-127368" [e387d448-2b77-40c5-a65b-335fb7902fd5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1216 03:32:24.224030   43267 system_pods.go:89] "kube-apiserver-pause-127368" [50ced11f-9adb-413d-a3f9-02a8e4e1e331] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1216 03:32:24.224040   43267 system_pods.go:89] "kube-controller-manager-pause-127368" [879a4960-022e-4682-87bd-30d9240d52ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1216 03:32:24.224046   43267 system_pods.go:89] "kube-proxy-6tst4" [c5bc773a-8ef2-4f79-bdd3-ead643257601] Running
	I1216 03:32:24.224059   43267 system_pods.go:89] "kube-scheduler-pause-127368" [a9aa83b7-fd6e-4ff0-b3d3-d8d9b8111355] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1216 03:32:24.224069   43267 system_pods.go:126] duration metric: took 3.097079ms to wait for k8s-apps to be running ...
	I1216 03:32:24.224079   43267 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 03:32:24.224129   43267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:32:24.245028   43267 system_svc.go:56] duration metric: took 20.938745ms WaitForService to wait for kubelet
	I1216 03:32:24.245058   43267 kubeadm.go:587] duration metric: took 326.227152ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 03:32:24.245079   43267 node_conditions.go:102] verifying NodePressure condition ...
	I1216 03:32:24.248307   43267 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1216 03:32:24.248334   43267 node_conditions.go:123] node cpu capacity is 2
	I1216 03:32:24.248350   43267 node_conditions.go:105] duration metric: took 3.264349ms to run NodePressure ...
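
The NodePressure verification above just reads the node object's capacity and condition list from the API server (ephemeral storage 17734596Ki and 2 CPUs in this run). The following is a rough client-go equivalent: the kubeconfig path is a placeholder, and this is not how minikube builds its client (it constructs a rest.Config directly, as the kapi.go line further below shows).

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Placeholder kubeconfig path, used only for this sketch.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-127368", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
    fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
    for _, c := range node.Status.Conditions {
        // MemoryPressure, DiskPressure and PIDPressure should all be False on a healthy node.
        fmt.Printf("%s=%s\n", c.Type, c.Status)
    }
}
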
	I1216 03:32:24.248366   43267 start.go:242] waiting for startup goroutines ...
	I1216 03:32:24.248377   43267 start.go:247] waiting for cluster config update ...
	I1216 03:32:24.248388   43267 start.go:256] writing updated cluster config ...
	I1216 03:32:24.248803   43267 ssh_runner.go:195] Run: rm -f paused
	I1216 03:32:24.255589   43267 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:32:24.256694   43267 kapi.go:59] client config for pause-127368: &rest.Config{Host:"https://192.168.83.23:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22158-5036/.minikube/profiles/pause-127368/client.crt", KeyFile:"/home/jenkins/minikube-integration/22158-5036/.minikube/profiles/pause-127368/client.key", CAFile:"/home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28171a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1216 03:32:24.261712   43267 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rcwxg" in "kube-system" namespace to be "Ready" or be gone ...
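
pod_ready.go above waits for each selected kube-system pod either to report the Ready condition or to be gone. The heart of that check is a scan of the pod's status conditions; here is a compact sketch of the same contract, assuming an already-built client-go clientset (the poll interval is arbitrary, and this is not minikube's pod_ready.go).

package podwait

import (
    "context"
    "time"

    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the pod currently has condition Ready=True.
func isPodReady(pod *corev1.Pod) bool {
    for _, c := range pod.Status.Conditions {
        if c.Type == corev1.PodReady {
            return c.Status == corev1.ConditionTrue
        }
    }
    return false
}

// waitPodReady polls until the pod is Ready, has been deleted, or ctx expires,
// mirroring the "Ready or be gone" wording in the log above.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    for {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        switch {
        case apierrors.IsNotFound(err):
            return nil // the pod is gone, which also satisfies the wait
        case err != nil:
            return err
        case isPodReady(pod):
            return nil
        }
        select {
        case <-ctx.Done():
            return ctx.Err()
        case <-time.After(500 * time.Millisecond):
        }
    }
}
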
	I1216 03:32:22.139456   43455 main.go:143] libmachine: domain kindnet-079027 has defined MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:22.139894   43455 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:0f:e2:b0", ip: ""} in network mk-kindnet-079027: {Iface:virbr4 ExpiryTime:2025-12-16 04:32:16 +0000 UTC Type:0 Mac:52:54:00:0f:e2:b0 Iaid: IPaddr:192.168.72.85 Prefix:24 Hostname:kindnet-079027 Clientid:01:52:54:00:0f:e2:b0}
	I1216 03:32:22.139949   43455 main.go:143] libmachine: domain kindnet-079027 has defined IP address 192.168.72.85 and MAC address 52:54:00:0f:e2:b0 in network mk-kindnet-079027
	I1216 03:32:22.140137   43455 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1216 03:32:22.144464   43455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
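
The bash one-liner above strips any stale host.minikube.internal line from /etc/hosts and appends the gateway mapping for 192.168.72.1. Written out in Go purely as an illustration of what that command does (minikube itself just runs the shell fragment over SSH, exactly as logged):

package main

import (
    "fmt"
    "os"
    "strings"
)

// pinHost rewrites an /etc/hosts-style file so that exactly one line maps ip
// to name, dropping any previous line that ends with "<tab>name".
func pinHost(path, ip, name string) error {
    data, err := os.ReadFile(path)
    if err != nil {
        return err
    }
    var kept []string
    for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
        if strings.HasSuffix(line, "\t"+name) {
            continue // same effect as grep -v $'\t<name>$'
        }
        kept = append(kept, line)
    }
    kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
    // Writing the real /etc/hosts needs root; point the sketch at a copy.
    if err := pinHost("/tmp/hosts.copy", "192.168.72.1", "host.minikube.internal"); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}
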
	I1216 03:32:22.160624   43455 kubeadm.go:884] updating cluster {Name:kindnet-079027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-079027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.85 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 03:32:22.160723   43455 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 03:32:22.160774   43455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:32:22.194514   43455 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.2". assuming images are not preloaded.
	I1216 03:32:22.194573   43455 ssh_runner.go:195] Run: which lz4
	I1216 03:32:22.198588   43455 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1216 03:32:22.203357   43455 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1216 03:32:22.203394   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (340306595 bytes)
	I1216 03:32:23.506673   43455 crio.go:462] duration metric: took 1.308112075s to copy over tarball
	I1216 03:32:23.506737   43455 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1216 03:32:25.096947   43455 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.590165303s)
	I1216 03:32:25.096980   43455 crio.go:469] duration metric: took 1.590286184s to extract the tarball
	I1216 03:32:25.096987   43455 ssh_runner.go:146] rm: /preloaded.tar.lz4
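
The preload step above copies a ~340 MB image tarball into the guest, unpacks it with tar's lz4 filter, and deletes it. Locally, the extraction is one external command; the short sketch below uses the flags visible in the log (this is not minikube's ssh_runner, and it assumes the lz4 binary is on PATH and the tarball exists at /preloaded.tar.lz4):

package main

import (
    "fmt"
    "os"
    "os/exec"
)

func main() {
    // Same invocation as in the log, minus sudo.
    cmd := exec.Command("tar",
        "--xattrs", "--xattrs-include", "security.capability",
        "-I", "lz4",
        "-C", "/var",
        "-xf", "/preloaded.tar.lz4")
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    if err := cmd.Run(); err != nil {
        fmt.Fprintln(os.Stderr, "extract failed:", err)
        os.Exit(1)
    }
}
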
	I1216 03:32:25.133489   43455 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 03:32:25.171322   43455 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 03:32:25.171354   43455 cache_images.go:86] Images are preloaded, skipping loading
	I1216 03:32:25.171362   43455 kubeadm.go:935] updating node { 192.168.72.85 8443 v1.34.2 crio true true} ...
	I1216 03:32:25.171457   43455 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-079027 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.85
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:kindnet-079027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1216 03:32:25.171529   43455 ssh_runner.go:195] Run: crio config
	I1216 03:32:25.217771   43455 cni.go:84] Creating CNI manager for "kindnet"
	I1216 03:32:25.217799   43455 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1216 03:32:25.217826   43455 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.85 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-079027 NodeName:kindnet-079027 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.85"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.85 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 03:32:25.218010   43455 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.85
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-079027"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.85"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.85"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
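
The generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---). Pulling a field such as kubernetesVersion or podSubnet back out of such a file only needs a decoder loop; the small sketch below uses gopkg.in/yaml.v3, with the file path taken from the log but everything else illustrative:

package main

import (
    "fmt"
    "io"
    "os"

    "gopkg.in/yaml.v3"
)

func main() {
    f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log above
    if err != nil {
        panic(err)
    }
    defer f.Close()

    dec := yaml.NewDecoder(f)
    for {
        var doc map[string]interface{}
        if err := dec.Decode(&doc); err == io.EOF {
            break
        } else if err != nil {
            panic(err)
        }
        // Each document carries its own `kind`; print it plus a couple of fields.
        fmt.Println("kind:", doc["kind"])
        if v, ok := doc["kubernetesVersion"]; ok {
            fmt.Println("  kubernetesVersion:", v)
        }
        if net, ok := doc["networking"].(map[string]interface{}); ok {
            fmt.Println("  podSubnet:", net["podSubnet"])
        }
    }
}
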
	
	I1216 03:32:25.218091   43455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1216 03:32:25.230209   43455 binaries.go:51] Found k8s binaries, skipping transfer
	I1216 03:32:25.230285   43455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 03:32:25.241975   43455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1216 03:32:25.261775   43455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 03:32:25.282649   43455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1216 03:32:25.304175   43455 ssh_runner.go:195] Run: grep 192.168.72.85	control-plane.minikube.internal$ /etc/hosts
	I1216 03:32:25.308033   43455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.85	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 03:32:25.321957   43455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 03:32:25.468287   43455 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 03:32:25.501615   43455 certs.go:69] Setting up /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027 for IP: 192.168.72.85
	I1216 03:32:25.501642   43455 certs.go:195] generating shared ca certs ...
	I1216 03:32:25.501662   43455 certs.go:227] acquiring lock for ca certs: {Name:mk77e952ddad6d1f2b7d1d07b6d50cdef35b56ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:25.501874   43455 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22158-5036/.minikube/ca.key
	I1216 03:32:25.501957   43455 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.key
	I1216 03:32:25.501976   43455 certs.go:257] generating profile certs ...
	I1216 03:32:25.502052   43455 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.key
	I1216 03:32:25.502081   43455 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt with IP's: []
	I1216 03:32:25.698062   43455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt ...
	I1216 03:32:25.698089   43455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt: {Name:mka2d54a423cae6c2ff9c307c3d6506f036e4266 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:25.698278   43455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.key ...
	I1216 03:32:25.698292   43455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.key: {Name:mkace4fcdebb26f91a01a6b40dc1b1edc405d7e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:25.698410   43455 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.key.c1f626c8
	I1216 03:32:25.698427   43455 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.crt.c1f626c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.85]
	I1216 03:32:25.744638   43455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.crt.c1f626c8 ...
	I1216 03:32:25.744660   43455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.crt.c1f626c8: {Name:mka6295fd5b0ff9bc346f24a0f09e16fb82be421 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:25.744820   43455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.key.c1f626c8 ...
	I1216 03:32:25.744836   43455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.key.c1f626c8: {Name:mk6a434ca45d7e7bc6a8b0625ecf2d911b7304c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:25.744946   43455 certs.go:382] copying /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.crt.c1f626c8 -> /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.crt
	I1216 03:32:25.745022   43455 certs.go:386] copying /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.key.c1f626c8 -> /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.key
	I1216 03:32:25.745080   43455 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/proxy-client.key
	I1216 03:32:25.745109   43455 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/proxy-client.crt with IP's: []
	I1216 03:32:25.817328   43455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/proxy-client.crt ...
	I1216 03:32:25.817348   43455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/proxy-client.crt: {Name:mkc62020bad3565b7bd4310e95b12e3102eb51f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 03:32:25.817513   43455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/proxy-client.key ...
	I1216 03:32:25.817527   43455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/proxy-client.key: {Name:mk8c09ff3034191c5e136db519004cb87d0fc0e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
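
The certs.go/crypto.go steps above mint a client certificate, an apiserver serving certificate, and a proxy-client certificate, each with a fresh key signed by the shared minikube CA. The essential mechanics reduce to crypto/x509; the sketch below is illustrative only (it creates its own throwaway CA, uses a single SAN IP from this log, and elides error handling), whereas minikube reuses the existing ca.crt/ca.key and writes under the profile directory with file locking.

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "math/big"
    "net"
    "os"
    "time"
)

func main() {
    // Throwaway CA for the sketch; errors are ignored for brevity.
    caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().AddDate(10, 0, 0),
        IsCA:                  true,
        KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
        BasicConstraintsValid: true,
    }
    caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    caCert, _ := x509.ParseCertificate(caDER)

    // Leaf certificate signed by the CA, with one SAN IP taken from the log.
    leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    leafTmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{CommonName: "minikube"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().AddDate(3, 0, 0),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        IPAddresses:  []net.IP{net.ParseIP("192.168.72.85")},
    }
    leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

    // Write the leaf in PEM form, the same shape as apiserver.crt above.
    out, _ := os.Create("apiserver.crt")
    defer out.Close()
    pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
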
	I1216 03:32:25.817731   43455 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/8974.pem (1338 bytes)
	W1216 03:32:25.817770   43455 certs.go:480] ignoring /home/jenkins/minikube-integration/22158-5036/.minikube/certs/8974_empty.pem, impossibly tiny 0 bytes
	I1216 03:32:25.817780   43455 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 03:32:25.817802   43455 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/ca.pem (1078 bytes)
	I1216 03:32:25.817832   43455 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/cert.pem (1123 bytes)
	I1216 03:32:25.817858   43455 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/certs/key.pem (1679 bytes)
	I1216 03:32:25.817896   43455 certs.go:484] found cert: /home/jenkins/minikube-integration/22158-5036/.minikube/files/etc/ssl/certs/89742.pem (1708 bytes)
	I1216 03:32:25.818405   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 03:32:25.849955   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 03:32:25.880348   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 03:32:25.909433   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1216 03:32:25.937850   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 03:32:25.964544   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 03:32:25.991908   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 03:32:26.020772   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 03:32:26.049169   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/files/etc/ssl/certs/89742.pem --> /usr/share/ca-certificates/89742.pem (1708 bytes)
	I1216 03:32:26.079312   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 03:32:26.107261   43455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22158-5036/.minikube/certs/8974.pem --> /usr/share/ca-certificates/8974.pem (1338 bytes)
	I1216 03:32:26.134964   43455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 03:32:26.153192   43455 ssh_runner.go:195] Run: openssl version
	I1216 03:32:26.158940   43455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:32:26.171367   43455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1216 03:32:26.183729   43455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:32:26.188837   43455 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:32:26.188888   43455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 03:32:26.195846   43455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1216 03:32:26.207621   43455 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1216 03:32:26.219316   43455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8974.pem
	I1216 03:32:26.230053   43455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8974.pem /etc/ssl/certs/8974.pem
	I1216 03:32:26.242603   43455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8974.pem
	I1216 03:32:26.247642   43455 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 16 02:36 /usr/share/ca-certificates/8974.pem
	I1216 03:32:26.247687   43455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8974.pem
	I1216 03:32:26.254285   43455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1216 03:32:26.265223   43455 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8974.pem /etc/ssl/certs/51391683.0
	I1216 03:32:26.276018   43455 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89742.pem
	I1216 03:32:26.287810   43455 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89742.pem /etc/ssl/certs/89742.pem
	I1216 03:32:26.300233   43455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89742.pem
	I1216 03:32:26.305053   43455 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 16 02:36 /usr/share/ca-certificates/89742.pem
	I1216 03:32:26.305107   43455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89742.pem
	I1216 03:32:26.312056   43455 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1216 03:32:26.322448   43455 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/89742.pem /etc/ssl/certs/3ec20f2e.0
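
The openssl/ln sequence above installs each CA bundle under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients locate trust anchors. Below is a small sketch that reproduces the two steps by shelling out to openssl; the paths are the ones in the log, so point it at copies unless you actually intend to modify /etc/ssl/certs.

package main

import (
    "fmt"
    "os"
    "os/exec"
    "path/filepath"
    "strings"
)

// hashLink asks openssl for the certificate's subject hash and creates the
// <certsDir>/<hash>.0 symlink that OpenSSL-based clients look up.
func hashLink(certPath, certsDir string) error {
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    if err != nil {
        return err
    }
    hash := strings.TrimSpace(string(out))
    link := filepath.Join(certsDir, hash+".0")
    _ = os.Remove(link) // emulate `ln -fs`
    return os.Symlink(certPath, link)
}

func main() {
    if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}

This is the same hashing scheme that c_rehash and update-ca-certificates use when installing a CA system-wide.
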
	I1216 03:32:26.333274   43455 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	W1216 03:32:22.060639   43066 pod_ready.go:104] pod "coredns-66bc5c9577-tf8wg" is not "Ready", error: <nil>
	W1216 03:32:24.061483   43066 pod_ready.go:104] pod "coredns-66bc5c9577-tf8wg" is not "Ready", error: <nil>
	W1216 03:32:26.561121   43066 pod_ready.go:104] pod "coredns-66bc5c9577-tf8wg" is not "Ready", error: <nil>
	I1216 03:32:26.337938   43455 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 03:32:26.337985   43455 kubeadm.go:401] StartCluster: {Name:kindnet-079027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:kindnet-079027 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.72.85 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 03:32:26.338047   43455 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 03:32:26.338101   43455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 03:32:26.371606   43455 cri.go:89] found id: ""
	I1216 03:32:26.371681   43455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 03:32:26.383120   43455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 03:32:26.395964   43455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 03:32:26.407347   43455 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 03:32:26.407371   43455 kubeadm.go:158] found existing configuration files:
	
	I1216 03:32:26.407411   43455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 03:32:26.417911   43455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 03:32:26.417971   43455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 03:32:26.428610   43455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 03:32:26.438412   43455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 03:32:26.438469   43455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 03:32:26.448950   43455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 03:32:26.458899   43455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 03:32:26.458945   43455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 03:32:26.469333   43455 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 03:32:26.479210   43455 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 03:32:26.479261   43455 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 03:32:26.489956   43455 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1216 03:32:26.537167   43455 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1216 03:32:26.537216   43455 kubeadm.go:319] [preflight] Running pre-flight checks
	I1216 03:32:26.630470   43455 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 03:32:26.630647   43455 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 03:32:26.630766   43455 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 03:32:26.642423   43455 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1216 03:32:26.269297   43267 pod_ready.go:104] pod "coredns-66bc5c9577-rcwxg" is not "Ready", error: <nil>
	W1216 03:32:28.767318   43267 pod_ready.go:104] pod "coredns-66bc5c9577-rcwxg" is not "Ready", error: <nil>
	I1216 03:32:26.643958   43455 out.go:252]   - Generating certificates and keys ...
	I1216 03:32:26.644045   43455 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1216 03:32:26.644129   43455 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1216 03:32:27.110875   43455 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 03:32:27.797875   43455 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1216 03:32:28.010426   43455 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1216 03:32:28.386109   43455 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1216 03:32:28.831264   43455 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1216 03:32:28.831582   43455 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [kindnet-079027 localhost] and IPs [192.168.72.85 127.0.0.1 ::1]
	I1216 03:32:28.972790   43455 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1216 03:32:28.973078   43455 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [kindnet-079027 localhost] and IPs [192.168.72.85 127.0.0.1 ::1]
	I1216 03:32:29.200041   43455 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 03:32:29.620065   43455 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 03:32:29.874239   43455 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1216 03:32:29.875035   43455 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 03:32:30.031559   43455 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 03:32:30.155587   43455 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 03:32:30.250334   43455 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 03:32:30.520222   43455 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 03:32:30.836257   43455 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 03:32:30.836392   43455 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 03:32:30.838520   43455 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 03:32:30.840167   43455 out.go:252]   - Booting up control plane ...
	I1216 03:32:30.840284   43455 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 03:32:30.840391   43455 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 03:32:30.840483   43455 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 03:32:30.856463   43455 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 03:32:30.856668   43455 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1216 03:32:30.863390   43455 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1216 03:32:30.863636   43455 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 03:32:30.863703   43455 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1216 03:32:31.046118   43455 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 03:32:31.046281   43455 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1216 03:32:29.061836   43066 pod_ready.go:104] pod "coredns-66bc5c9577-tf8wg" is not "Ready", error: <nil>
	W1216 03:32:31.559306   43066 pod_ready.go:104] pod "coredns-66bc5c9577-tf8wg" is not "Ready", error: <nil>
	W1216 03:32:30.768164   43267 pod_ready.go:104] pod "coredns-66bc5c9577-rcwxg" is not "Ready", error: <nil>
	I1216 03:32:32.267695   43267 pod_ready.go:94] pod "coredns-66bc5c9577-rcwxg" is "Ready"
	I1216 03:32:32.267731   43267 pod_ready.go:86] duration metric: took 8.005991377s for pod "coredns-66bc5c9577-rcwxg" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:32.270893   43267 pod_ready.go:83] waiting for pod "etcd-pause-127368" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:32.275755   43267 pod_ready.go:94] pod "etcd-pause-127368" is "Ready"
	I1216 03:32:32.275779   43267 pod_ready.go:86] duration metric: took 4.859537ms for pod "etcd-pause-127368" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:32.278307   43267 pod_ready.go:83] waiting for pod "kube-apiserver-pause-127368" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:33.286540   43267 pod_ready.go:94] pod "kube-apiserver-pause-127368" is "Ready"
	I1216 03:32:33.286577   43267 pod_ready.go:86] duration metric: took 1.008247465s for pod "kube-apiserver-pause-127368" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:33.289695   43267 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-127368" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:33.294699   43267 pod_ready.go:94] pod "kube-controller-manager-pause-127368" is "Ready"
	I1216 03:32:33.294718   43267 pod_ready.go:86] duration metric: took 4.994853ms for pod "kube-controller-manager-pause-127368" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:33.464945   43267 pod_ready.go:83] waiting for pod "kube-proxy-6tst4" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:33.865870   43267 pod_ready.go:94] pod "kube-proxy-6tst4" is "Ready"
	I1216 03:32:33.865902   43267 pod_ready.go:86] duration metric: took 400.925495ms for pod "kube-proxy-6tst4" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:34.065414   43267 pod_ready.go:83] waiting for pod "kube-scheduler-pause-127368" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:35.665155   43267 pod_ready.go:94] pod "kube-scheduler-pause-127368" is "Ready"
	I1216 03:32:35.665184   43267 pod_ready.go:86] duration metric: took 1.599744584s for pod "kube-scheduler-pause-127368" in "kube-system" namespace to be "Ready" or be gone ...
	I1216 03:32:35.665198   43267 pod_ready.go:40] duration metric: took 11.409563586s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1216 03:32:35.709029   43267 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1216 03:32:35.710544   43267 out.go:179] * Done! kubectl is now configured to use "pause-127368" cluster and "default" namespace by default
	I1216 03:32:31.547368   43455 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.188282ms
	I1216 03:32:31.550736   43455 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1216 03:32:31.550870   43455 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.72.85:8443/livez
	I1216 03:32:31.551022   43455 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1216 03:32:31.551185   43455 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1216 03:32:34.134652   43455 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.583147427s
	I1216 03:32:35.093111   43455 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.54197938s
	W1216 03:32:33.561017   43066 pod_ready.go:104] pod "coredns-66bc5c9577-tf8wg" is not "Ready", error: <nil>
	W1216 03:32:36.061968   43066 pod_ready.go:104] pod "coredns-66bc5c9577-tf8wg" is not "Ready", error: <nil>
	I1216 03:32:37.053200   43455 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501473676s
	I1216 03:32:37.089488   43455 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 03:32:37.112094   43455 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 03:32:37.130854   43455 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 03:32:37.131062   43455 kubeadm.go:319] [mark-control-plane] Marking the node kindnet-079027 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 03:32:37.157191   43455 kubeadm.go:319] [bootstrap-token] Using token: d9z9vh.30gs2ua52txf8zcw
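
The bootstrap token printed above (d9z9vh.30gs2ua52txf8zcw) follows kubeadm's "<6-char id>.<16-char secret>" format over the lowercase a-z0-9 alphabet. Generating a token of that shape takes only crypto/rand; the sketch below is a stand-in, not kubeadm's own generator.

package main

import (
    "crypto/rand"
    "fmt"
    "math/big"
)

const alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"

// randString draws n characters uniformly from the bootstrap-token alphabet.
func randString(n int) string {
    b := make([]byte, n)
    for i := range b {
        idx, err := rand.Int(rand.Reader, big.NewInt(int64(len(alphabet))))
        if err != nil {
            panic(err)
        }
        b[i] = alphabet[idx.Int64()]
    }
    return string(b)
}

func main() {
    // Bootstrap tokens are "<id>.<secret>" with a 6-char id and 16-char secret.
    fmt.Printf("%s.%s\n", randString(6), randString(16))
}
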
	
	
	==> CRI-O <==
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.298112501Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b960414-eac1-4cfd-b926-183cdbd6e1ee name=/runtime.v1.RuntimeService/Version
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.299933819Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9947d599-2475-43ae-8e8f-c357ef3bd5ad name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.301208757Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765855958301177908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9947d599-2475-43ae-8e8f-c357ef3bd5ad name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.302255588Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2bcb0be6-3725-4065-bb2b-3a20db8e998e name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.302320150Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2bcb0be6-3725-4065-bb2b-3a20db8e998e name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.302889416Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30cbfcf72c599e2f73f84aa02a68c130b8e7b215afd97bea124adf804a1a61cd,PodSandboxId:fa9452a9930b7fa54c0769f7dcd06a96965a9d8992337228ed778d536be4aa2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765855942253086054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tst4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5bc773a-8ef2-4f79-bdd3-ead643257601,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1659127a04c0e595791c31ebcd6c3cedcd9c75aa05195038dc70a120b78024d6,PodSandboxId:9c94e1cb0dd180d945fec246cfca9cddeada9362ad391b96e90bf3637b16b89d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765855942263923773,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rcwxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4c343db-7dab-4de5-89f2-ce2687b6631f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb21282af131f6b326f61f602cb404d88d036dbaf91f58041b9d0cc59a457f7,PodSandboxId:dae9ca1923edaf49e343cba4e7db0bb11a35c290c3b22a27e42b7e493908a7be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765855938427364711,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582fb99923c3b4e606630b30dbd77848,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac2603a4581191c10d4b3e10f52b0b12af70b9f64c42741b7205147715c7078,PodSandboxId:18ab7bec90a901e061c388215a8dda583ecb32acea89125b131de26364383a3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:C
ONTAINER_RUNNING,CreatedAt:1765855938448790249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d448dfe87545fc587b352dd2eaa7a763,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c4b91c38409e3b515017c952463970ec783353ede06e6b8d23b2ad4d57fd9b,PodSandboxId:f2de72045bd453f040fc6752a3acecee68ede348807bf0988cb9d01cae15f88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765855938416880990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f172ccbfdd61b8b761a84f05e6d663b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eac74d80c5016151ab3e0030f99826d0bb335fdbaf666b409fc395d37c2c73f,PodSandboxId:7e479d126c3d4b0c4c9a19f1c82ac9acbe2fee3e6b99d562f65f8f7e28db08c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a
5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765855938402228366,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d64ac4f35b628e0d630c2501f74195a7,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73fce5742a5738a73591a4789422a3ac9a9ac34d107f86ed41a13551bad41a7,PodSandboxId:9c94e1cb0dd180d945fec246cfca9cddeada9362ad391b
96e90bf3637b16b89d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765855914547417482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rcwxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4c343db-7dab-4de5-89f2-ce2687b6631f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7886b3cb93db3490c21067b3f08a44be6ead41881cd5cb2681c295485c892b48,PodSandboxId:fa9452a9930b7fa54c0769f7dcd06a96965a9d8992337228ed778d536be4aa2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765855913538605352,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tst4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5bc773a-8ef2-4f79-bdd3-ead643257601,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8b867c1bdad08e7d4b11632415fa3ce077d2fbbc7d269b68165673c2c8ac4e0,PodSandboxId:dae9ca1923edaf49e343cba4e7db0bb11a35c290c3b22a27e42b7e493908a7be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765855913395194273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582fb99923c3b4e606630b30dbd77848,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1603771fc3bb957264f0e0f69af13f99d97b23a395e946185f6f8606e58076e3,PodSandboxId:7e479d126c3d4b0c4c9a19f1c82ac9acbe2fee3e6b99d562f65f8f7e28db08c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765855913432386960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d64ac4f35b628e0d630c2501f74195a7,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e1943eab5540b2524451fd9149e5bdb48d529525992e4427f98907806a8120c,PodSandboxId:18ab7bec90a901e061c388215a8dda583ecb32acea89125b131de26364383a3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765855913425395652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d448dfe87545fc587b352dd2eaa7a763,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af7add7fe9690f182b65bb9686b7a53b05509c0da8b8a4bc1586130f494913d,PodSandboxId:f2de72045bd453f040fc6752a3acecee68ede348807bf0988cb9d01cae15f88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765855913412415764,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f172ccbfdd61b8b761a84f05e6d663b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2bcb0be6-3725-4065-bb2b-3a20db8e998e name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.320814402Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=280762e9-d923-4bda-a58f-832974995912 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.321052279Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:9c94e1cb0dd180d945fec246cfca9cddeada9362ad391b96e90bf3637b16b89d,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-rcwxg,Uid:b4c343db-7dab-4de5-89f2-ce2687b6631f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1765855913204232454,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-rcwxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4c343db-7dab-4de5-89f2-ce2687b6631f,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-16T03:31:05.859371477Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fa9452a9930b7fa54c0769f7dcd06a96965a9d8992337228ed778d536be4aa2f,Metadata:&PodSandboxMetadata{Name:kube-proxy-6tst4,Uid:c5bc773a-8ef2-4f79-bdd3-ead643257601,Namespace:kube-system,Attempt
:1,},State:SANDBOX_READY,CreatedAt:1765855912963123786,Labels:map[string]string{controller-revision-hash: 66d5f8d6f6,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6tst4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5bc773a-8ef2-4f79-bdd3-ead643257601,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-12-16T03:31:05.653871945Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:18ab7bec90a901e061c388215a8dda583ecb32acea89125b131de26364383a3d,Metadata:&PodSandboxMetadata{Name:etcd-pause-127368,Uid:d448dfe87545fc587b352dd2eaa7a763,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1765855912961982544,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d448dfe87545fc587b352dd2eaa7a763,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/
etcd.advertise-client-urls: https://192.168.83.23:2379,kubernetes.io/config.hash: d448dfe87545fc587b352dd2eaa7a763,kubernetes.io/config.seen: 2025-12-16T03:31:00.474294105Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7e479d126c3d4b0c4c9a19f1c82ac9acbe2fee3e6b99d562f65f8f7e28db08c9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-127368,Uid:d64ac4f35b628e0d630c2501f74195a7,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1765855912925323676,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d64ac4f35b628e0d630c2501f74195a7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.23:8443,kubernetes.io/config.hash: d64ac4f35b628e0d630c2501f74195a7,kubernetes.io/config.seen: 2025-12-16T03:31:00.474297613Z,kubernetes.io/config.source: file,},RuntimeHan
dler:,},&PodSandbox{Id:dae9ca1923edaf49e343cba4e7db0bb11a35c290c3b22a27e42b7e493908a7be,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-127368,Uid:582fb99923c3b4e606630b30dbd77848,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1765855912894943280,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582fb99923c3b4e606630b30dbd77848,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 582fb99923c3b4e606630b30dbd77848,kubernetes.io/config.seen: 2025-12-16T03:31:00.474299712Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f2de72045bd453f040fc6752a3acecee68ede348807bf0988cb9d01cae15f88d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-127368,Uid:4f172ccbfdd61b8b761a84f05e6d663b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1765855912878424675,Labels:map[string]strin
g{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f172ccbfdd61b8b761a84f05e6d663b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4f172ccbfdd61b8b761a84f05e6d663b,kubernetes.io/config.seen: 2025-12-16T03:31:00.474298743Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=280762e9-d923-4bda-a58f-832974995912 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.322953709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=edbb5d9d-21d4-4ef3-8d7f-0d7f09b28854 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.323043405Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=edbb5d9d-21d4-4ef3-8d7f-0d7f09b28854 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.323357502Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30cbfcf72c599e2f73f84aa02a68c130b8e7b215afd97bea124adf804a1a61cd,PodSandboxId:fa9452a9930b7fa54c0769f7dcd06a96965a9d8992337228ed778d536be4aa2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765855942253086054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tst4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5bc773a-8ef2-4f79-bdd3-ead643257601,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1659127a04c0e595791c31ebcd6c3cedcd9c75aa05195038dc70a120b78024d6,PodSandboxId:9c94e1cb0dd180d945fec246cfca9cddeada9362ad391b96e90bf3637b16b89d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765855942263923773,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rcwxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4c343db-7dab-4de5-89f2-ce2687b6631f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb21282af131f6b326f61f602cb404d88d036dbaf91f58041b9d0cc59a457f7,PodSandboxId:dae9ca1923edaf49e343cba4e7db0bb11a35c290c3b22a27e42b7e493908a7be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765855938427364711,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582fb99923c3b4e606630b30dbd77848,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac2603a4581191c10d4b3e10f52b0b12af70b9f64c42741b7205147715c7078,PodSandboxId:18ab7bec90a901e061c388215a8dda583ecb32acea89125b131de26364383a3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:C
ONTAINER_RUNNING,CreatedAt:1765855938448790249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d448dfe87545fc587b352dd2eaa7a763,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c4b91c38409e3b515017c952463970ec783353ede06e6b8d23b2ad4d57fd9b,PodSandboxId:f2de72045bd453f040fc6752a3acecee68ede348807bf0988cb9d01cae15f88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765855938416880990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f172ccbfdd61b8b761a84f05e6d663b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eac74d80c5016151ab3e0030f99826d0bb335fdbaf666b409fc395d37c2c73f,PodSandboxId:7e479d126c3d4b0c4c9a19f1c82ac9acbe2fee3e6b99d562f65f8f7e28db08c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a
5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765855938402228366,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d64ac4f35b628e0d630c2501f74195a7,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73fce5742a5738a73591a4789422a3ac9a9ac34d107f86ed41a13551bad41a7,PodSandboxId:9c94e1cb0dd180d945fec246cfca9cddeada9362ad391b
96e90bf3637b16b89d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765855914547417482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rcwxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4c343db-7dab-4de5-89f2-ce2687b6631f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7886b3cb93db3490c21067b3f08a44be6ead41881cd5cb2681c295485c892b48,PodSandboxId:fa9452a9930b7fa54c0769f7dcd06a96965a9d8992337228ed778d536be4aa2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765855913538605352,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tst4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5bc773a-8ef2-4f79-bdd3-ead643257601,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8b867c1bdad08e7d4b11632415fa3ce077d2fbbc7d269b68165673c2c8ac4e0,PodSandboxId:dae9ca1923edaf49e343cba4e7db0bb11a35c290c3b22a27e42b7e493908a7be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765855913395194273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582fb99923c3b4e606630b30dbd77848,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1603771fc3bb957264f0e0f69af13f99d97b23a395e946185f6f8606e58076e3,PodSandboxId:7e479d126c3d4b0c4c9a19f1c82ac9acbe2fee3e6b99d562f65f8f7e28db08c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765855913432386960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d64ac4f35b628e0d630c2501f74195a7,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e1943eab5540b2524451fd9149e5bdb48d529525992e4427f98907806a8120c,PodSandboxId:18ab7bec90a901e061c388215a8dda583ecb32acea89125b131de26364383a3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765855913425395652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d448dfe87545fc587b352dd2eaa7a763,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af7add7fe9690f182b65bb9686b7a53b05509c0da8b8a4bc1586130f494913d,PodSandboxId:f2de72045bd453f040fc6752a3acecee68ede348807bf0988cb9d01cae15f88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765855913412415764,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f172ccbfdd61b8b761a84f05e6d663b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=edbb5d9d-21d4-4ef3-8d7f-0d7f09b28854 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.350347583Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4032c4e1-f108-4d48-bb5c-fa5d007321fc name=/runtime.v1.RuntimeService/Version
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.350424483Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4032c4e1-f108-4d48-bb5c-fa5d007321fc name=/runtime.v1.RuntimeService/Version
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.351603334Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4bc85b2-3548-41c7-9f47-9e9d58873ced name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.351928720Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765855958351910689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4bc85b2-3548-41c7-9f47-9e9d58873ced name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.352733430Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76a6c762-4cd2-45b9-91be-ea26b2a61072 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.352796770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76a6c762-4cd2-45b9-91be-ea26b2a61072 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.353009745Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30cbfcf72c599e2f73f84aa02a68c130b8e7b215afd97bea124adf804a1a61cd,PodSandboxId:fa9452a9930b7fa54c0769f7dcd06a96965a9d8992337228ed778d536be4aa2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765855942253086054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tst4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5bc773a-8ef2-4f79-bdd3-ead643257601,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1659127a04c0e595791c31ebcd6c3cedcd9c75aa05195038dc70a120b78024d6,PodSandboxId:9c94e1cb0dd180d945fec246cfca9cddeada9362ad391b96e90bf3637b16b89d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765855942263923773,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rcwxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4c343db-7dab-4de5-89f2-ce2687b6631f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb21282af131f6b326f61f602cb404d88d036dbaf91f58041b9d0cc59a457f7,PodSandboxId:dae9ca1923edaf49e343cba4e7db0bb11a35c290c3b22a27e42b7e493908a7be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765855938427364711,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582fb99923c3b4e606630b30dbd77848,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac2603a4581191c10d4b3e10f52b0b12af70b9f64c42741b7205147715c7078,PodSandboxId:18ab7bec90a901e061c388215a8dda583ecb32acea89125b131de26364383a3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:C
ONTAINER_RUNNING,CreatedAt:1765855938448790249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d448dfe87545fc587b352dd2eaa7a763,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c4b91c38409e3b515017c952463970ec783353ede06e6b8d23b2ad4d57fd9b,PodSandboxId:f2de72045bd453f040fc6752a3acecee68ede348807bf0988cb9d01cae15f88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765855938416880990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f172ccbfdd61b8b761a84f05e6d663b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eac74d80c5016151ab3e0030f99826d0bb335fdbaf666b409fc395d37c2c73f,PodSandboxId:7e479d126c3d4b0c4c9a19f1c82ac9acbe2fee3e6b99d562f65f8f7e28db08c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a
5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765855938402228366,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d64ac4f35b628e0d630c2501f74195a7,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73fce5742a5738a73591a4789422a3ac9a9ac34d107f86ed41a13551bad41a7,PodSandboxId:9c94e1cb0dd180d945fec246cfca9cddeada9362ad391b
96e90bf3637b16b89d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765855914547417482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rcwxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4c343db-7dab-4de5-89f2-ce2687b6631f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7886b3cb93db3490c21067b3f08a44be6ead41881cd5cb2681c295485c892b48,PodSandboxId:fa9452a9930b7fa54c0769f7dcd06a96965a9d8992337228ed778d536be4aa2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765855913538605352,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tst4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5bc773a-8ef2-4f79-bdd3-ead643257601,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8b867c1bdad08e7d4b11632415fa3ce077d2fbbc7d269b68165673c2c8ac4e0,PodSandboxId:dae9ca1923edaf49e343cba4e7db0bb11a35c290c3b22a27e42b7e493908a7be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765855913395194273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582fb99923c3b4e606630b30dbd77848,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1603771fc3bb957264f0e0f69af13f99d97b23a395e946185f6f8606e58076e3,PodSandboxId:7e479d126c3d4b0c4c9a19f1c82ac9acbe2fee3e6b99d562f65f8f7e28db08c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765855913432386960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d64ac4f35b628e0d630c2501f74195a7,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e1943eab5540b2524451fd9149e5bdb48d529525992e4427f98907806a8120c,PodSandboxId:18ab7bec90a901e061c388215a8dda583ecb32acea89125b131de26364383a3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765855913425395652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d448dfe87545fc587b352dd2eaa7a763,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af7add7fe9690f182b65bb9686b7a53b05509c0da8b8a4bc1586130f494913d,PodSandboxId:f2de72045bd453f040fc6752a3acecee68ede348807bf0988cb9d01cae15f88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765855913412415764,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f172ccbfdd61b8b761a84f05e6d663b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=76a6c762-4cd2-45b9-91be-ea26b2a61072 name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.388962742Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=030c1584-6798-4081-bcd0-bfe8dedeedcf name=/runtime.v1.RuntimeService/Version
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.389178882Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=030c1584-6798-4081-bcd0-bfe8dedeedcf name=/runtime.v1.RuntimeService/Version
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.390678003Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=72fe40cc-b99e-414b-9602-2b2b2ce6774d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.391164786Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1765855958391139782,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124341,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=72fe40cc-b99e-414b-9602-2b2b2ce6774d name=/runtime.v1.ImageService/ImageFsInfo
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.392254879Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0314a34-8d1a-4c3f-81ad-10177425a1dc name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.392306497Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0314a34-8d1a-4c3f-81ad-10177425a1dc name=/runtime.v1.RuntimeService/ListContainers
	Dec 16 03:32:38 pause-127368 crio[2811]: time="2025-12-16 03:32:38.392636034Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:30cbfcf72c599e2f73f84aa02a68c130b8e7b215afd97bea124adf804a1a61cd,PodSandboxId:fa9452a9930b7fa54c0769f7dcd06a96965a9d8992337228ed778d536be4aa2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_RUNNING,CreatedAt:1765855942253086054,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tst4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5bc773a-8ef2-4f79-bdd3-ead643257601,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1659127a04c0e595791c31ebcd6c3cedcd9c75aa05195038dc70a120b78024d6,PodSandboxId:9c94e1cb0dd180d945fec246cfca9cddeada9362ad391b96e90bf3637b16b89d,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1765855942263923773,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rcwxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4c343db-7dab-4de5-89f2-ce2687b6631f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcb21282af131f6b326f61f602cb404d88d036dbaf91f58041b9d0cc59a457f7,PodSandboxId:dae9ca1923edaf49e343cba4e7db0bb11a35c290c3b22a27e42b7e493908a7be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_RUNNING,CreatedAt:1765855938427364711,Labels:map[string]string{io.kubernetes.container
.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582fb99923c3b4e606630b30dbd77848,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eac2603a4581191c10d4b3e10f52b0b12af70b9f64c42741b7205147715c7078,PodSandboxId:18ab7bec90a901e061c388215a8dda583ecb32acea89125b131de26364383a3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:C
ONTAINER_RUNNING,CreatedAt:1765855938448790249,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d448dfe87545fc587b352dd2eaa7a763,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5c4b91c38409e3b515017c952463970ec783353ede06e6b8d23b2ad4d57fd9b,PodSandboxId:f2de72045bd453f040fc6752a3acecee68ede348807bf0988cb9d01cae15f88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_RUNNING,CreatedAt:1765855938416880990,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f172ccbfdd61b8b761a84f05e6d663b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eac74d80c5016151ab3e0030f99826d0bb335fdbaf666b409fc395d37c2c73f,PodSandboxId:7e479d126c3d4b0c4c9a19f1c82ac9acbe2fee3e6b99d562f65f8f7e28db08c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:a
5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_RUNNING,CreatedAt:1765855938402228366,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d64ac4f35b628e0d630c2501f74195a7,},Annotations:map[string]string{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c73fce5742a5738a73591a4789422a3ac9a9ac34d107f86ed41a13551bad41a7,PodSandboxId:9c94e1cb0dd180d945fec246cfca9cddeada9362ad391b
96e90bf3637b16b89d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1765855914547417482,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-rcwxg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4c343db-7dab-4de5-89f2-ce2687b6631f,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7886b3cb93db3490c21067b3f08a44be6ead41881cd5cb2681c295485c892b48,PodSandboxId:fa9452a9930b7fa54c0769f7dcd06a96965a9d8992337228ed778d536be4aa2f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45,State:CONTAINER_EXITED,CreatedAt:1765855913538605352,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6tst4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5bc773a-8ef2-4f79-bdd3-ead643257601,},Annotations:map[string]string{io.kubernetes.container.hash: 3b839fb3,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8b867c1bdad08e7d4b11632415fa3ce077d2fbbc7d269b68165673c2c8ac4e0,PodSandboxId:dae9ca1923edaf49e343cba4e7db0bb11a35c290c3b22a27e42b7e493908a7be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952,State:CONTAINER_EXITED,CreatedAt:1765855913395194273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 582fb99923c3b4e606630b30dbd77848,},Annotations:map[string]string{io.kubernetes.container.hash: e7f4971d,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hos
tPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1603771fc3bb957264f0e0f69af13f99d97b23a395e946185f6f8606e58076e3,PodSandboxId:7e479d126c3d4b0c4c9a19f1c82ac9acbe2fee3e6b99d562f65f8f7e28db08c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85,State:CONTAINER_EXITED,CreatedAt:1765855913432386960,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d64ac4f35b628e0d630c2501f74195a7,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: b9a837f5,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e1943eab5540b2524451fd9149e5bdb48d529525992e4427f98907806a8120c,PodSandboxId:18ab7bec90a901e061c388215a8dda583ecb32acea89125b131de26364383a3d,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1,State:CONTAINER_EXITED,CreatedAt:1765855913425395652,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d448dfe87545fc587b352dd2eaa7a763,},Annotations:map[string]string{io.kubernetes.container.hash: 5a6992ae,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6af7add7fe9690f182b65bb9686b7a53b05509c0da8b8a4bc1586130f494913d,PodSandboxId:f2de72045bd453f040fc6752a3acecee68ede348807bf0988cb9d01cae15f88d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8,State:CONTAINER_EXITED,CreatedAt:1765855913412415764,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-127368,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f172ccbfdd61b8b761a84f05e6d663b,},Annotations:map[string]string{io.kubernetes.container.hash: 53c47387,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e0314a34-8d1a-4c3f-81ad-10177425a1dc name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	1659127a04c0e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   16 seconds ago      Running             coredns                   2                   9c94e1cb0dd18       coredns-66bc5c9577-rcwxg               kube-system
	30cbfcf72c599       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   16 seconds ago      Running             kube-proxy                2                   fa9452a9930b7       kube-proxy-6tst4                       kube-system
	eac2603a45811       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   20 seconds ago      Running             etcd                      2                   18ab7bec90a90       etcd-pause-127368                      kube-system
	bcb21282af131       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   20 seconds ago      Running             kube-scheduler            2                   dae9ca1923eda       kube-scheduler-pause-127368            kube-system
	a5c4b91c38409       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   20 seconds ago      Running             kube-controller-manager   2                   f2de72045bd45       kube-controller-manager-pause-127368   kube-system
	1eac74d80c501       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   20 seconds ago      Running             kube-apiserver            2                   7e479d126c3d4       kube-apiserver-pause-127368            kube-system
	c73fce5742a57       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   43 seconds ago      Exited              coredns                   1                   9c94e1cb0dd18       coredns-66bc5c9577-rcwxg               kube-system
	7886b3cb93db3       8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45   44 seconds ago      Exited              kube-proxy                1                   fa9452a9930b7       kube-proxy-6tst4                       kube-system
	1603771fc3bb9       a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85   45 seconds ago      Exited              kube-apiserver            1                   7e479d126c3d4       kube-apiserver-pause-127368            kube-system
	1e1943eab5540       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1   45 seconds ago      Exited              etcd                      1                   18ab7bec90a90       etcd-pause-127368                      kube-system
	6af7add7fe969       01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8   45 seconds ago      Exited              kube-controller-manager   1                   f2de72045bd45       kube-controller-manager-pause-127368   kube-system
	b8b867c1bdad0       88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952   45 seconds ago      Exited              kube-scheduler            1                   dae9ca1923eda       kube-scheduler-pause-127368            kube-system
	
	
	==> coredns [1659127a04c0e595791c31ebcd6c3cedcd9c75aa05195038dc70a120b78024d6] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40926 - 15620 "HINFO IN 9096102086732634136.7010297379036465662. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032944409s
	
	
	==> coredns [c73fce5742a5738a73591a4789422a3ac9a9ac34d107f86ed41a13551bad41a7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:44559 - 51558 "HINFO IN 3322695004666707743.1788114612334312608. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.063784547s
	
	
	==> describe nodes <==
	Name:               pause-127368
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-127368
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5b7b13696cde014ddc06afed585902028fcb1b3e
	                    minikube.k8s.io/name=pause-127368
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_16T03_31_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Dec 2025 03:30:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-127368
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Dec 2025 03:32:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Dec 2025 03:32:21 +0000   Tue, 16 Dec 2025 03:30:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Dec 2025 03:32:21 +0000   Tue, 16 Dec 2025 03:30:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Dec 2025 03:32:21 +0000   Tue, 16 Dec 2025 03:30:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Dec 2025 03:32:21 +0000   Tue, 16 Dec 2025 03:31:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.23
	  Hostname:    pause-127368
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 bdd176aec0a44d47b426ef6399527a4a
	  System UUID:                bdd176ae-c0a4-4d47-b426-ef6399527a4a
	  Boot ID:                    49c938f2-a066-4ecd-abb5-79dd6b2937b0
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-rcwxg                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     93s
	  kube-system                 etcd-pause-127368                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         98s
	  kube-system                 kube-apiserver-pause-127368             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-pause-127368    200m (10%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-6tst4                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-scheduler-pause-127368             100m (5%)     0 (0%)      0 (0%)           0 (0%)         98s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 91s                kube-proxy       
	  Normal  Starting                 15s                kube-proxy       
	  Normal  Starting                 39s                kube-proxy       
	  Normal  Starting                 98s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  98s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  98s                kubelet          Node pause-127368 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s                kubelet          Node pause-127368 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s                kubelet          Node pause-127368 status is now: NodeHasSufficientPID
	  Normal  NodeReady                97s                kubelet          Node pause-127368 status is now: NodeReady
	  Normal  RegisteredNode           94s                node-controller  Node pause-127368 event: Registered Node pause-127368 in Controller
	  Normal  RegisteredNode           37s                node-controller  Node pause-127368 event: Registered Node pause-127368 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)  kubelet          Node pause-127368 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)  kubelet          Node pause-127368 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)  kubelet          Node pause-127368 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                node-controller  Node pause-127368 event: Registered Node pause-127368 in Controller
	
	
	==> dmesg <==
	[Dec16 03:30] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000069] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.013878] (rpcbind)[118]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.191906] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.086066] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.096140] kauditd_printk_skb: 102 callbacks suppressed
	[Dec16 03:31] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.494816] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.104932] kauditd_printk_skb: 225 callbacks suppressed
	[ +21.513283] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.462103] kauditd_printk_skb: 297 callbacks suppressed
	[Dec16 03:32] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.120737] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.249338] kauditd_printk_skb: 112 callbacks suppressed
	[  +7.938767] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [1e1943eab5540b2524451fd9149e5bdb48d529525992e4427f98907806a8120c] <==
	{"level":"warn","ts":"2025-12-16T03:31:57.410762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:31:57.423067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:31:57.448874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:31:57.470852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:31:57.490746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:31:57.515375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:31:57.581838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37386","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-16T03:32:15.435036Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-16T03:32:15.435109Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-127368","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.23:2380"],"advertise-client-urls":["https://192.168.83.23:2379"]}
	{"level":"error","ts":"2025-12-16T03:32:15.435197Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-16T03:32:15.435251Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-12-16T03:32:15.437107Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-16T03:32:15.437164Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-16T03:32:15.437171Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-16T03:32:15.437222Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.23:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-16T03:32:15.437229Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.23:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-16T03:32:15.437235Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.23:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-12-16T03:32:15.437267Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-16T03:32:15.437334Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"83122a6f182c046f","current-leader-member-id":"83122a6f182c046f"}
	{"level":"info","ts":"2025-12-16T03:32:15.437374Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-16T03:32:15.437403Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-16T03:32:15.440734Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.83.23:2380"}
	{"level":"error","ts":"2025-12-16T03:32:15.440809Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.23:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-16T03:32:15.440833Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.83.23:2380"}
	{"level":"info","ts":"2025-12-16T03:32:15.440839Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-127368","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.23:2380"],"advertise-client-urls":["https://192.168.83.23:2379"]}
	
	
	==> etcd [eac2603a4581191c10d4b3e10f52b0b12af70b9f64c42741b7205147715c7078] <==
	{"level":"warn","ts":"2025-12-16T03:32:20.213327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.250593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.267720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.278961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.298257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.309647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.321579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.341631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.350840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.360241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.374736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.391761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.399540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.425010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.441822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.470694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.474628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.489909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.500240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.511251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.529960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.568454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.579265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.586478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-16T03:32:20.638238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51888","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:32:38 up 2 min,  0 users,  load average: 1.22, 0.53, 0.20
	Linux pause-127368 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Dec 16 00:48:01 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [1603771fc3bb957264f0e0f69af13f99d97b23a395e946185f6f8606e58076e3] <==
	I1216 03:32:05.378910       1 storage_flowcontrol.go:172] APF bootstrap ensurer is exiting
	I1216 03:32:05.378921       1 cluster_authentication_trust_controller.go:482] Shutting down cluster_authentication_trust_controller controller
	I1216 03:32:05.378933       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I1216 03:32:05.378944       1 controller.go:132] Ending legacy_token_tracking_controller
	I1216 03:32:05.378948       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I1216 03:32:05.378956       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I1216 03:32:05.378968       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I1216 03:32:05.378975       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I1216 03:32:05.380119       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1216 03:32:05.380419       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1216 03:32:05.380550       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	I1216 03:32:05.380849       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1216 03:32:05.380883       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I1216 03:32:05.380952       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1216 03:32:05.380997       1 secure_serving.go:259] Stopped listening on [::]:8443
	I1216 03:32:05.381008       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1216 03:32:05.381162       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1216 03:32:05.381230       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1216 03:32:05.381271       1 controller.go:157] Shutting down quota evaluator
	I1216 03:32:05.381313       1 controller.go:176] quota evaluator worker shutdown
	I1216 03:32:05.382193       1 repairip.go:246] Shutting down ipallocator-repair-controller
	I1216 03:32:05.382747       1 controller.go:176] quota evaluator worker shutdown
	I1216 03:32:05.382772       1 controller.go:176] quota evaluator worker shutdown
	I1216 03:32:05.382778       1 controller.go:176] quota evaluator worker shutdown
	I1216 03:32:05.382781       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-apiserver [1eac74d80c5016151ab3e0030f99826d0bb335fdbaf666b409fc395d37c2c73f] <==
	I1216 03:32:21.496257       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1216 03:32:21.496338       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1216 03:32:21.500415       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1216 03:32:21.504573       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1216 03:32:21.504699       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1216 03:32:21.507847       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1216 03:32:21.507935       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1216 03:32:21.508017       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1216 03:32:21.508056       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1216 03:32:21.510571       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1216 03:32:21.510648       1 aggregator.go:171] initial CRD sync complete...
	I1216 03:32:21.510683       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 03:32:21.510695       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 03:32:21.510700       1 cache.go:39] Caches are synced for autoregister controller
	I1216 03:32:21.561145       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1216 03:32:21.971948       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1216 03:32:22.330894       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1216 03:32:23.119016       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.83.23]
	I1216 03:32:23.120607       1 controller.go:667] quota admission added evaluator for: endpoints
	I1216 03:32:23.126180       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 03:32:23.726236       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1216 03:32:23.787183       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1216 03:32:23.828605       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 03:32:23.839411       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 03:32:31.900086       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [6af7add7fe9690f182b65bb9686b7a53b05509c0da8b8a4bc1586130f494913d] <==
	I1216 03:32:01.869782       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1216 03:32:01.869867       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1216 03:32:01.869875       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1216 03:32:01.869934       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 03:32:01.869945       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1216 03:32:01.869952       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1216 03:32:01.872584       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1216 03:32:01.872660       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1216 03:32:01.872706       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-127368"
	I1216 03:32:01.872735       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1216 03:32:01.872739       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1216 03:32:01.876612       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 03:32:01.878005       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1216 03:32:01.880440       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1216 03:32:01.882966       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1216 03:32:01.885524       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1216 03:32:01.885574       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1216 03:32:01.885578       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 03:32:01.886869       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1216 03:32:01.887943       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1216 03:32:01.890358       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1216 03:32:01.891378       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 03:32:01.892328       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1216 03:32:01.894853       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1216 03:32:01.894918       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [a5c4b91c38409e3b515017c952463970ec783353ede06e6b8d23b2ad4d57fd9b] <==
	I1216 03:32:24.845231       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1216 03:32:24.845371       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1216 03:32:24.845384       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1216 03:32:24.852557       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1216 03:32:24.856180       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1216 03:32:24.863799       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 03:32:24.864310       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1216 03:32:24.873599       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1216 03:32:24.873631       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1216 03:32:24.873637       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1216 03:32:24.878693       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1216 03:32:24.880599       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1216 03:32:24.880721       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1216 03:32:24.881004       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1216 03:32:24.881872       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1216 03:32:24.882198       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1216 03:32:24.882765       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1216 03:32:24.885197       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1216 03:32:24.886156       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1216 03:32:24.886277       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1216 03:32:24.886369       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-127368"
	I1216 03:32:24.886574       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1216 03:32:24.889296       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1216 03:32:24.902960       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1216 03:32:24.927018       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [30cbfcf72c599e2f73f84aa02a68c130b8e7b215afd97bea124adf804a1a61cd] <==
	I1216 03:32:22.652039       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 03:32:22.757632       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 03:32:22.758206       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.23"]
	E1216 03:32:22.758323       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 03:32:22.831719       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1216 03:32:22.831796       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 03:32:22.831830       1 server_linux.go:132] "Using iptables Proxier"
	I1216 03:32:22.851245       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 03:32:22.852881       1 server.go:527] "Version info" version="v1.34.2"
	I1216 03:32:22.852895       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:32:22.863562       1 config.go:200] "Starting service config controller"
	I1216 03:32:22.863692       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 03:32:22.864027       1 config.go:106] "Starting endpoint slice config controller"
	I1216 03:32:22.864309       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 03:32:22.864553       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 03:32:22.864886       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 03:32:22.864633       1 config.go:309] "Starting node config controller"
	I1216 03:32:22.865397       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 03:32:22.865660       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 03:32:22.964789       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 03:32:22.965259       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1216 03:32:22.965267       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [7886b3cb93db3490c21067b3f08a44be6ead41881cd5cb2681c295485c892b48] <==
	I1216 03:31:56.300542       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1216 03:31:58.602341       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1216 03:31:58.602442       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.23"]
	E1216 03:31:58.602724       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 03:31:58.794971       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1216 03:31:58.795238       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1216 03:31:58.795338       1 server_linux.go:132] "Using iptables Proxier"
	I1216 03:31:58.867179       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 03:31:58.879911       1 server.go:527] "Version info" version="v1.34.2"
	I1216 03:31:58.879955       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:31:58.926927       1 config.go:309] "Starting node config controller"
	I1216 03:31:58.927019       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1216 03:31:58.927049       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1216 03:31:58.928100       1 config.go:200] "Starting service config controller"
	I1216 03:31:58.928165       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1216 03:31:58.928212       1 config.go:106] "Starting endpoint slice config controller"
	I1216 03:31:58.928220       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1216 03:31:58.928238       1 config.go:403] "Starting serviceCIDR config controller"
	I1216 03:31:58.928243       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1216 03:31:59.028937       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1216 03:31:59.029051       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1216 03:31:59.029151       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b8b867c1bdad08e7d4b11632415fa3ce077d2fbbc7d269b68165673c2c8ac4e0] <==
	I1216 03:31:57.242947       1 serving.go:386] Generated self-signed cert in-memory
	I1216 03:31:59.145401       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 03:31:59.145439       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:31:59.151185       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 03:31:59.151276       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1216 03:31:59.151286       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1216 03:31:59.151309       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 03:31:59.153957       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:31:59.153983       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:31:59.153998       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 03:31:59.154003       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 03:31:59.251809       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1216 03:31:59.254100       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 03:31:59.254238       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:32:15.710286       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1216 03:32:15.710355       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:32:15.710387       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 03:32:15.710406       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1216 03:32:15.710976       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1216 03:32:15.711024       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1216 03:32:15.711043       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1216 03:32:15.711070       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [bcb21282af131f6b326f61f602cb404d88d036dbaf91f58041b9d0cc59a457f7] <==
	I1216 03:32:19.963173       1 serving.go:386] Generated self-signed cert in-memory
	I1216 03:32:22.619768       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1216 03:32:22.621634       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 03:32:22.650785       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1216 03:32:22.655790       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 03:32:22.656566       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:32:22.659264       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:32:22.656589       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 03:32:22.660323       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 03:32:22.655894       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1216 03:32:22.665707       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1216 03:32:22.760705       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1216 03:32:22.761530       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 03:32:22.765993       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 16 03:32:21 pause-127368 kubelet[3947]: E1216 03:32:21.235021    3947 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-127368\" not found" node="pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: E1216 03:32:21.260173    3947 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-127368\" not found" node="pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.424355    3947 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: E1216 03:32:21.553140    3947 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-127368\" already exists" pod="kube-system/kube-apiserver-pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.553435    3947 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: E1216 03:32:21.568706    3947 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-127368\" already exists" pod="kube-system/kube-controller-manager-pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.568746    3947 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: E1216 03:32:21.582317    3947 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-127368\" already exists" pod="kube-system/kube-scheduler-pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.582440    3947 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.591775    3947 kubelet_node_status.go:124] "Node was previously registered" node="pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.591886    3947 kubelet_node_status.go:78] "Successfully registered node" node="pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.591922    3947 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.594182    3947 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: E1216 03:32:21.608157    3947 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-127368\" already exists" pod="kube-system/etcd-pause-127368"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.909568    3947 apiserver.go:52] "Watching apiserver"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.921657    3947 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.964235    3947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5bc773a-8ef2-4f79-bdd3-ead643257601-xtables-lock\") pod \"kube-proxy-6tst4\" (UID: \"c5bc773a-8ef2-4f79-bdd3-ead643257601\") " pod="kube-system/kube-proxy-6tst4"
	Dec 16 03:32:21 pause-127368 kubelet[3947]: I1216 03:32:21.966744    3947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5bc773a-8ef2-4f79-bdd3-ead643257601-lib-modules\") pod \"kube-proxy-6tst4\" (UID: \"c5bc773a-8ef2-4f79-bdd3-ead643257601\") " pod="kube-system/kube-proxy-6tst4"
	Dec 16 03:32:22 pause-127368 kubelet[3947]: I1216 03:32:22.215982    3947 scope.go:117] "RemoveContainer" containerID="c73fce5742a5738a73591a4789422a3ac9a9ac34d107f86ed41a13551bad41a7"
	Dec 16 03:32:22 pause-127368 kubelet[3947]: I1216 03:32:22.216396    3947 scope.go:117] "RemoveContainer" containerID="7886b3cb93db3490c21067b3f08a44be6ead41881cd5cb2681c295485c892b48"
	Dec 16 03:32:28 pause-127368 kubelet[3947]: E1216 03:32:28.058447    3947 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765855948057766854 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 16 03:32:28 pause-127368 kubelet[3947]: E1216 03:32:28.058476    3947 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765855948057766854 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 16 03:32:31 pause-127368 kubelet[3947]: I1216 03:32:31.867781    3947 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 16 03:32:38 pause-127368 kubelet[3947]: E1216 03:32:38.059782    3947 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1765855958059389819 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	Dec 16 03:32:38 pause-127368 kubelet[3947]: E1216 03:32:38.059803    3947 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1765855958059389819 fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"} used_bytes:{value:124341} inodes_used:{value:48}}"
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-127368 -n pause-127368
helpers_test.go:270: (dbg) Run:  kubectl --context pause-127368 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (59.23s)
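
Note: the field-selector query issued by the post-mortem helper above (all pods with status.phase!=Running across every namespace) can also be rerun outside the test harness. The following is a minimal, hypothetical Go sketch using client-go; it is not part of the minikube test suite, and it assumes the host kubeconfig still contains the "pause-127368" context seen in the logs.

// list-not-running.go: hypothetical helper, mirrors
// `kubectl --context pause-127368 get po -A --field-selector=status.phase!=Running`
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Resolve the default kubeconfig, overriding the current context with the
	// profile name taken from the logs above ("pause-127368" is an assumption
	// about what is still present on the host).
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "pause-127368"},
	).ClientConfig()
	if err != nil {
		panic(err)
	}

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same field selector the post-mortem helper passes to kubectl.
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}

	// An empty result matches a healthy post-mortem (no stuck pods).
	for _, p := range pods.Items {
		fmt.Println(p.Namespace, p.Name, p.Status.Phase)
	}
}
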


Test pass (376/431)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 21.47
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.2/json-events 9.17
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.07
18 TestDownloadOnly/v1.34.2/DeleteAll 0.15
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 9.75
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.15
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.63
31 TestOffline 102.95
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 125.27
40 TestAddons/serial/GCPAuth/Namespaces 0.14
41 TestAddons/serial/GCPAuth/FakeCredentials 9.52
44 TestAddons/parallel/Registry 17.44
45 TestAddons/parallel/RegistryCreds 0.69
47 TestAddons/parallel/InspektorGadget 10.66
48 TestAddons/parallel/MetricsServer 5.75
50 TestAddons/parallel/CSI 61.63
51 TestAddons/parallel/Headlamp 21.94
52 TestAddons/parallel/CloudSpanner 6.51
53 TestAddons/parallel/LocalPath 14.25
54 TestAddons/parallel/NvidiaDevicePlugin 6.7
55 TestAddons/parallel/Yakd 11.71
57 TestAddons/StoppedEnableDisable 88.49
58 TestCertOptions 40.11
59 TestCertExpiration 258.1
61 TestForceSystemdFlag 60.78
62 TestForceSystemdEnv 54.2
67 TestErrorSpam/setup 35.03
68 TestErrorSpam/start 0.31
69 TestErrorSpam/status 0.62
70 TestErrorSpam/pause 1.44
71 TestErrorSpam/unpause 1.7
72 TestErrorSpam/stop 5.08
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 49.16
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 33.35
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.07
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.33
84 TestFunctional/serial/CacheCmd/cache/add_local 2.08
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.33
89 TestFunctional/serial/CacheCmd/cache/delete 0.11
90 TestFunctional/serial/MinikubeKubectlCmd 0.12
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 34.91
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.22
95 TestFunctional/serial/LogsFileCmd 1.19
96 TestFunctional/serial/InvalidService 4.07
98 TestFunctional/parallel/ConfigCmd 0.41
99 TestFunctional/parallel/DashboardCmd 12.75
100 TestFunctional/parallel/DryRun 0.23
101 TestFunctional/parallel/InternationalLanguage 0.13
102 TestFunctional/parallel/StatusCmd 0.7
106 TestFunctional/parallel/ServiceCmdConnect 12.52
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 39.96
110 TestFunctional/parallel/SSHCmd 0.36
111 TestFunctional/parallel/CpCmd 1.14
112 TestFunctional/parallel/MySQL 32.37
113 TestFunctional/parallel/FileSync 0.15
114 TestFunctional/parallel/CertSync 1.27
118 TestFunctional/parallel/NodeLabels 0.07
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.38
122 TestFunctional/parallel/License 0.33
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
124 TestFunctional/parallel/ServiceCmd/DeployApp 9.2
125 TestFunctional/parallel/ProfileCmd/profile_list 0.44
126 TestFunctional/parallel/MountCmd/any-port 9.16
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
128 TestFunctional/parallel/Version/short 0.06
129 TestFunctional/parallel/Version/components 0.42
130 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
131 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
132 TestFunctional/parallel/ImageCommands/ImageListJson 0.19
133 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
134 TestFunctional/parallel/ImageCommands/ImageBuild 3.8
135 TestFunctional/parallel/ImageCommands/Setup 1.73
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.04
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.79
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.62
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.48
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.83
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.89
143 TestFunctional/parallel/ServiceCmd/List 0.49
144 TestFunctional/parallel/MountCmd/specific-port 1.4
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
147 TestFunctional/parallel/ServiceCmd/Format 0.29
148 TestFunctional/parallel/ServiceCmd/URL 0.31
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.27
159 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
160 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
161 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 72.33
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 374.08
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.04
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 0.11
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.47
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 2.01
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.18
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.44
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 0.12
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 0.11
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 32.63
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 0.06
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.31
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.31
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 4.7
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.42
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 14.53
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.2
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.12
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 0.65
199 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 25.62
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.15
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 42.07
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.37
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 1.15
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 31.04
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.19
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.14
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 0.09
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.31
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.32
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.07
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.08
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.08
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 26.17
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.36
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.4
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.47
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.39
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.54
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 8.24
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.43
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.33
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.28
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.53
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.2
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.19
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.2
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.2
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.39
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.85
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.05
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.91
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.87
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.32
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.7
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.64
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 2.64
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.71
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.52
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.03
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.01
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.01
261 TestMultiControlPlane/serial/StartCluster 190.4
262 TestMultiControlPlane/serial/DeployApp 6.46
263 TestMultiControlPlane/serial/PingHostFromPods 1.25
264 TestMultiControlPlane/serial/AddWorkerNode 43.2
265 TestMultiControlPlane/serial/NodeLabels 0.06
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.64
267 TestMultiControlPlane/serial/CopyFile 10.38
268 TestMultiControlPlane/serial/StopSecondaryNode 82.07
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.48
270 TestMultiControlPlane/serial/RestartSecondaryNode 31.84
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.81
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 355.97
273 TestMultiControlPlane/serial/DeleteSecondaryNode 17.71
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.5
275 TestMultiControlPlane/serial/StopCluster 254.82
276 TestMultiControlPlane/serial/RestartCluster 91.81
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.49
278 TestMultiControlPlane/serial/AddSecondaryNode 70.98
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.65
284 TestJSONOutput/start/Command 76.34
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.7
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.61
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 6.79
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.22
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 74.81
316 TestMountStart/serial/StartWithMountFirst 19.42
317 TestMountStart/serial/VerifyMountFirst 0.29
318 TestMountStart/serial/StartWithMountSecond 18.63
319 TestMountStart/serial/VerifyMountSecond 0.3
320 TestMountStart/serial/DeleteFirst 0.67
321 TestMountStart/serial/VerifyMountPostDelete 0.3
322 TestMountStart/serial/Stop 1.28
323 TestMountStart/serial/RestartStopped 18
324 TestMountStart/serial/VerifyMountPostStop 0.29
327 TestMultiNode/serial/FreshStart2Nodes 92.73
328 TestMultiNode/serial/DeployApp2Nodes 6.06
329 TestMultiNode/serial/PingHostFrom2Pods 0.82
330 TestMultiNode/serial/AddNode 40.64
331 TestMultiNode/serial/MultiNodeLabels 0.06
332 TestMultiNode/serial/ProfileList 0.44
333 TestMultiNode/serial/CopyFile 5.91
334 TestMultiNode/serial/StopNode 2.14
335 TestMultiNode/serial/StartAfterStop 36.72
336 TestMultiNode/serial/RestartKeepsNodes 291.17
337 TestMultiNode/serial/DeleteNode 2.7
338 TestMultiNode/serial/StopMultiNode 169.48
339 TestMultiNode/serial/RestartMultiNode 90.17
340 TestMultiNode/serial/ValidateNameConflict 37.05
347 TestScheduledStopUnix 106.51
351 TestRunningBinaryUpgrade 368.32
353 TestKubernetesUpgrade 232.71
356 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
357 TestNoKubernetes/serial/StartWithK8s 76.38
358 TestNoKubernetes/serial/StartWithStopK8s 25.78
359 TestNoKubernetes/serial/Start 22.47
360 TestStoppedBinaryUpgrade/Setup 3.37
361 TestStoppedBinaryUpgrade/Upgrade 94.36
362 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
363 TestNoKubernetes/serial/VerifyK8sNotRunning 0.15
364 TestNoKubernetes/serial/ProfileList 29.89
365 TestNoKubernetes/serial/Stop 1.43
366 TestNoKubernetes/serial/StartNoArgs 17.5
367 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.16
368 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
376 TestNetworkPlugins/group/false 3.69
380 TestISOImage/Setup 20.63
382 TestISOImage/Binaries/crictl 0.17
383 TestISOImage/Binaries/curl 0.17
384 TestISOImage/Binaries/docker 0.16
385 TestISOImage/Binaries/git 0.16
386 TestISOImage/Binaries/iptables 0.19
387 TestISOImage/Binaries/podman 0.17
388 TestISOImage/Binaries/rsync 0.16
389 TestISOImage/Binaries/socat 0.17
390 TestISOImage/Binaries/wget 0.17
391 TestISOImage/Binaries/VBoxControl 0.16
392 TestISOImage/Binaries/VBoxService 0.17
394 TestPause/serial/Start 113.33
402 TestNetworkPlugins/group/auto/Start 80.47
404 TestNetworkPlugins/group/kindnet/Start 56.71
405 TestNetworkPlugins/group/calico/Start 80.21
406 TestNetworkPlugins/group/auto/KubeletFlags 0.18
407 TestNetworkPlugins/group/auto/NetCatPod 11.21
408 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
409 TestNetworkPlugins/group/auto/DNS 0.14
410 TestNetworkPlugins/group/auto/Localhost 0.12
411 TestNetworkPlugins/group/auto/HairPin 0.11
412 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
413 TestNetworkPlugins/group/kindnet/NetCatPod 11.29
414 TestNetworkPlugins/group/custom-flannel/Start 70.64
415 TestNetworkPlugins/group/kindnet/DNS 0.17
416 TestNetworkPlugins/group/kindnet/Localhost 0.14
417 TestNetworkPlugins/group/kindnet/HairPin 0.14
418 TestNetworkPlugins/group/enable-default-cni/Start 91.95
419 TestNetworkPlugins/group/flannel/Start 74.33
420 TestNetworkPlugins/group/calico/ControllerPod 6.01
421 TestNetworkPlugins/group/calico/KubeletFlags 0.23
422 TestNetworkPlugins/group/calico/NetCatPod 13.32
423 TestNetworkPlugins/group/calico/DNS 0.16
424 TestNetworkPlugins/group/calico/Localhost 0.13
425 TestNetworkPlugins/group/calico/HairPin 0.23
426 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.18
427 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.28
428 TestNetworkPlugins/group/custom-flannel/DNS 0.24
429 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
430 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
431 TestNetworkPlugins/group/bridge/Start 56.37
433 TestStartStop/group/old-k8s-version/serial/FirstStart 93.21
434 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
435 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.48
436 TestNetworkPlugins/group/flannel/ControllerPod 6.01
437 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
438 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
439 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
440 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
441 TestNetworkPlugins/group/flannel/NetCatPod 13.29
443 TestStartStop/group/no-preload/serial/FirstStart 89.67
444 TestNetworkPlugins/group/flannel/DNS 0.15
445 TestNetworkPlugins/group/flannel/Localhost 0.14
446 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
447 TestNetworkPlugins/group/flannel/HairPin 0.15
448 TestNetworkPlugins/group/bridge/NetCatPod 10.31
449 TestNetworkPlugins/group/bridge/DNS 0.2
450 TestNetworkPlugins/group/bridge/Localhost 0.15
451 TestNetworkPlugins/group/bridge/HairPin 0.15
453 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84.03
455 TestStartStop/group/newest-cni/serial/FirstStart 54.02
456 TestStartStop/group/old-k8s-version/serial/DeployApp 11.38
457 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.88
458 TestStartStop/group/old-k8s-version/serial/Stop 86.39
459 TestStartStop/group/newest-cni/serial/DeployApp 0
460 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.04
461 TestStartStop/group/newest-cni/serial/Stop 86.81
462 TestStartStop/group/no-preload/serial/DeployApp 11.29
463 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.28
464 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.97
465 TestStartStop/group/no-preload/serial/Stop 86.01
466 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.87
467 TestStartStop/group/default-k8s-diff-port/serial/Stop 87.49
468 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
469 TestStartStop/group/old-k8s-version/serial/SecondStart 44.43
470 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
471 TestStartStop/group/newest-cni/serial/SecondStart 32.82
472 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
473 TestStartStop/group/no-preload/serial/SecondStart 58.61
474 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
475 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 64.29
476 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 12.01
477 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
478 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
479 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
480 TestStartStop/group/newest-cni/serial/Pause 4.47
482 TestStartStop/group/embed-certs/serial/FirstStart 89.68
483 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
484 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
485 TestStartStop/group/old-k8s-version/serial/Pause 2.78
487 TestISOImage/PersistentMounts//data 0.18
488 TestISOImage/PersistentMounts//var/lib/docker 0.17
489 TestISOImage/PersistentMounts//var/lib/cni 0.18
490 TestISOImage/PersistentMounts//var/lib/kubelet 0.19
491 TestISOImage/PersistentMounts//var/lib/minikube 0.18
492 TestISOImage/PersistentMounts//var/lib/toolbox 0.17
493 TestISOImage/PersistentMounts//var/lib/boot2docker 0.17
494 TestISOImage/VersionJSON 0.24
495 TestISOImage/eBPFSupport 0.24
496 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
497 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.08
498 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
499 TestStartStop/group/no-preload/serial/Pause 3.13
500 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
501 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
502 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
503 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.41
504 TestStartStop/group/embed-certs/serial/DeployApp 9.25
505 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.85
506 TestStartStop/group/embed-certs/serial/Stop 82.73
507 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.14
508 TestStartStop/group/embed-certs/serial/SecondStart 44.18
509 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
510 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
511 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.19
512 TestStartStop/group/embed-certs/serial/Pause 2.32
TestDownloadOnly/v1.28.0/json-events (21.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-356549 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-356549 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (21.471862766s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (21.47s)
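
The download-only flow exercised above can be reproduced outside the test harness with the same flags; a minimal sketch, assuming a minikube binary on PATH and using the throwaway profile name "download-demo" (the name is illustrative, not from this run):

    # Reproduce the download-only start this test runs, then remove the profile.
    minikube start -o=json --download-only -p download-demo --force \
      --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2
    minikube delete -p download-demo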

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1216 02:25:42.074665    8974 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1216 02:25:42.074757    8974 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
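
The preload-exists check only verifies that the cached tarball from the previous step is on disk; a quick way to confirm the same thing by hand (path taken from the log lines above; on a default install MINIKUBE_HOME resolves to ~/.minikube rather than the Jenkins workspace):

    # Inspect the cached preload tarball the test looks for.
    ls -lh /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4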

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-356549
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-356549: exit status 85 (68.848069ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-356549 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-356549 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:25:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:25:20.653164    8986 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:25:20.653260    8986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:25:20.653273    8986 out.go:374] Setting ErrFile to fd 2...
	I1216 02:25:20.653277    8986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:25:20.653456    8986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
	W1216 02:25:20.653560    8986 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22158-5036/.minikube/config/config.json: open /home/jenkins/minikube-integration/22158-5036/.minikube/config/config.json: no such file or directory
	I1216 02:25:20.653994    8986 out.go:368] Setting JSON to true
	I1216 02:25:20.654823    8986 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":466,"bootTime":1765851455,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 02:25:20.654875    8986 start.go:143] virtualization: kvm guest
	I1216 02:25:20.659078    8986 out.go:99] [download-only-356549] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1216 02:25:20.659212    8986 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball: no such file or directory
	I1216 02:25:20.659263    8986 notify.go:221] Checking for updates...
	I1216 02:25:20.660328    8986 out.go:171] MINIKUBE_LOCATION=22158
	I1216 02:25:20.661542    8986 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:25:20.662821    8986 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	I1216 02:25:20.663889    8986 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	I1216 02:25:20.665053    8986 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1216 02:25:20.667049    8986 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 02:25:20.667254    8986 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:25:21.139636    8986 out.go:99] Using the kvm2 driver based on user configuration
	I1216 02:25:21.139669    8986 start.go:309] selected driver: kvm2
	I1216 02:25:21.139675    8986 start.go:927] validating driver "kvm2" against <nil>
	I1216 02:25:21.140029    8986 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 02:25:21.140552    8986 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1216 02:25:21.140693    8986 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 02:25:21.140714    8986 cni.go:84] Creating CNI manager for ""
	I1216 02:25:21.140769    8986 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 02:25:21.140778    8986 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 02:25:21.140811    8986 start.go:353] cluster config:
	{Name:download-only-356549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-356549 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:25:21.140986    8986 iso.go:125] acquiring lock: {Name:mk055aa36b1051bc664b283a8a6fb2af4db94c44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 02:25:21.142493    8986 out.go:99] Downloading VM boot image ...
	I1216 02:25:21.142527    8986 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22158-5036/.minikube/cache/iso/amd64/minikube-v1.37.0-1765836331-22158-amd64.iso
	I1216 02:25:30.752773    8986 out.go:99] Starting "download-only-356549" primary control-plane node in "download-only-356549" cluster
	I1216 02:25:30.752811    8986 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1216 02:25:30.847539    8986 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1216 02:25:30.847573    8986 cache.go:65] Caching tarball of preloaded images
	I1216 02:25:30.847743    8986 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1216 02:25:30.849270    8986 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1216 02:25:30.849293    8986 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1216 02:25:30.946347    8986 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1216 02:25:30.946478    8986 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-356549 host does not exist
	  To start a cluster, run: "minikube start -p download-only-356549"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-356549
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.2/json-events (9.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-850299 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-850299 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.168336427s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (9.17s)

                                                
                                    
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1216 02:25:51.597067    8974 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1216 02:25:51.597114    8974 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-850299
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-850299: exit status 85 (69.336704ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-356549 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-356549 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-356549                                                                                                                                                 │ download-only-356549 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ start   │ -o=json --download-only -p download-only-850299 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-850299 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:25:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:25:42.479616    9227 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:25:42.479733    9227 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:25:42.479747    9227 out.go:374] Setting ErrFile to fd 2...
	I1216 02:25:42.479754    9227 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:25:42.479983    9227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
	I1216 02:25:42.480474    9227 out.go:368] Setting JSON to true
	I1216 02:25:42.481425    9227 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":487,"bootTime":1765851455,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 02:25:42.481477    9227 start.go:143] virtualization: kvm guest
	I1216 02:25:42.483125    9227 out.go:99] [download-only-850299] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 02:25:42.483268    9227 notify.go:221] Checking for updates...
	I1216 02:25:42.484426    9227 out.go:171] MINIKUBE_LOCATION=22158
	I1216 02:25:42.485701    9227 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:25:42.486862    9227 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	I1216 02:25:42.488000    9227 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	I1216 02:25:42.489280    9227 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1216 02:25:42.491426    9227 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 02:25:42.491662    9227 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:25:42.523024    9227 out.go:99] Using the kvm2 driver based on user configuration
	I1216 02:25:42.523048    9227 start.go:309] selected driver: kvm2
	I1216 02:25:42.523056    9227 start.go:927] validating driver "kvm2" against <nil>
	I1216 02:25:42.523368    9227 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 02:25:42.523846    9227 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1216 02:25:42.524023    9227 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 02:25:42.524048    9227 cni.go:84] Creating CNI manager for ""
	I1216 02:25:42.524110    9227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 02:25:42.524128    9227 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 02:25:42.524179    9227 start.go:353] cluster config:
	{Name:download-only-850299 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-850299 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:25:42.524292    9227 iso.go:125] acquiring lock: {Name:mk055aa36b1051bc664b283a8a6fb2af4db94c44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 02:25:42.525670    9227 out.go:99] Starting "download-only-850299" primary control-plane node in "download-only-850299" cluster
	I1216 02:25:42.525691    9227 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 02:25:42.679628    9227 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	I1216 02:25:42.679655    9227 cache.go:65] Caching tarball of preloaded images
	I1216 02:25:42.679810    9227 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1216 02:25:42.681473    9227 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1216 02:25:42.681490    9227 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1216 02:25:42.778021    9227 preload.go:295] Got checksum from GCS API "40ac2ac600e3e4b9dc7a3f8c6cb2ed91"
	I1216 02:25:42.778075    9227 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:40ac2ac600e3e4b9dc7a3f8c6cb2ed91 -> /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-850299 host does not exist
	  To start a cluster, run: "minikube start -p download-only-850299"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-850299
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/json-events (9.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-325050 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-325050 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.753543308s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (9.75s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1216 02:26:01.709137    8974 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1216 02:26:01.709167    8974 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-325050
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-325050: exit status 85 (67.633279ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-356549 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-356549 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-356549                                                                                                                                                        │ download-only-356549 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ start   │ -o=json --download-only -p download-only-850299 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio        │ download-only-850299 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-850299                                                                                                                                                        │ download-only-850299 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │ 16 Dec 25 02:25 UTC │
	│ start   │ -o=json --download-only -p download-only-325050 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-325050 │ jenkins │ v1.37.0 │ 16 Dec 25 02:25 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/16 02:25:52
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 02:25:52.005060    9423 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:25:52.005150    9423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:25:52.005161    9423 out.go:374] Setting ErrFile to fd 2...
	I1216 02:25:52.005168    9423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:25:52.005370    9423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
	I1216 02:25:52.005829    9423 out.go:368] Setting JSON to true
	I1216 02:25:52.006672    9423 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":497,"bootTime":1765851455,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 02:25:52.006720    9423 start.go:143] virtualization: kvm guest
	I1216 02:25:52.008503    9423 out.go:99] [download-only-325050] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 02:25:52.008614    9423 notify.go:221] Checking for updates...
	I1216 02:25:52.010732    9423 out.go:171] MINIKUBE_LOCATION=22158
	I1216 02:25:52.011908    9423 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:25:52.013107    9423 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	I1216 02:25:52.014212    9423 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	I1216 02:25:52.015210    9423 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1216 02:25:52.017148    9423 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 02:25:52.017327    9423 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:25:52.045082    9423 out.go:99] Using the kvm2 driver based on user configuration
	I1216 02:25:52.045112    9423 start.go:309] selected driver: kvm2
	I1216 02:25:52.045119    9423 start.go:927] validating driver "kvm2" against <nil>
	I1216 02:25:52.045412    9423 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1216 02:25:52.045880    9423 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1216 02:25:52.046034    9423 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 02:25:52.046057    9423 cni.go:84] Creating CNI manager for ""
	I1216 02:25:52.046100    9423 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1216 02:25:52.046109    9423 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1216 02:25:52.046151    9423 start.go:353] cluster config:
	{Name:download-only-325050 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-325050 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:25:52.046259    9423 iso.go:125] acquiring lock: {Name:mk055aa36b1051bc664b283a8a6fb2af4db94c44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 02:25:52.047564    9423 out.go:99] Starting "download-only-325050" primary control-plane node in "download-only-325050" cluster
	I1216 02:25:52.047587    9423 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 02:25:52.507227    9423 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I1216 02:25:52.507264    9423 cache.go:65] Caching tarball of preloaded images
	I1216 02:25:52.507454    9423 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1216 02:25:52.509236    9423 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1216 02:25:52.509262    9423 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1216 02:25:52.605386    9423 preload.go:295] Got checksum from GCS API "b4861df7675d96066744278d08e2cd35"
	I1216 02:25:52.605426    9423 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:b4861df7675d96066744278d08e2cd35 -> /home/jenkins/minikube-integration/22158-5036/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-325050 host does not exist
	  To start a cluster, run: "minikube start -p download-only-325050"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.15s)

                                                
                                    
TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-325050
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I1216 02:26:02.468588    8974 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-911494 --alsologtostderr --binary-mirror http://127.0.0.1:37719 --driver=kvm2  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-911494" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-911494
--- PASS: TestBinaryMirror (0.63s)
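
The test above points binary downloads at a local HTTP mirror via --binary-mirror; a minimal sketch of the same invocation, assuming a minikube binary on PATH, the placeholder profile name "mirror-demo", and a mirror already serving on 127.0.0.1:37719 (that port is simply the one this run happened to use):

    # Download-only start against a binary mirror, then clean up the profile.
    minikube start --download-only -p mirror-demo \
      --binary-mirror http://127.0.0.1:37719 \
      --driver=kvm2 --container-runtime=crio
    minikube delete -p mirror-demo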

                                                
                                    
TestOffline (102.95s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-187774 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-187774 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m41.863357098s)
helpers_test.go:176: Cleaning up "offline-crio-187774" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-187774
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-187774: (1.084310044s)
--- PASS: TestOffline (102.95s)
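
The command logged above is a plain start with --wait=true and 3072MB of memory; a sketch of the equivalent manual run, with "offline-demo" as an illustrative profile name and a minikube binary assumed on PATH:

    # Start a memory-constrained cluster, wait for all components, then delete it.
    minikube start -p offline-demo --memory=3072 --wait=true \
      --driver=kvm2 --container-runtime=crio
    minikube delete -p offline-demo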

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-703051
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-703051: exit status 85 (66.635009ms)

                                                
                                                
-- stdout --
	* Profile "addons-703051" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-703051"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-703051
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-703051: exit status 85 (65.363675ms)

                                                
                                                
-- stdout --
	* Profile "addons-703051" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-703051"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (125.27s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-703051 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-703051 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m5.269312877s)
--- PASS: TestAddons/Setup (125.27s)
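
The setup above enables every addon under test in a single start. A minimal sketch for reproducing a lighter variant locally, assuming the same built binary and KVM2 driver and keeping only the ingress-related addons used by the failing Ingress test (the trimmed addon list is an illustrative choice, not part of the original run):

    out/minikube-linux-amd64 start -p addons-703051 --wait=true --memory=4096 \
      --driver=kvm2 --container-runtime=crio \
      --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher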

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-703051 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-703051 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.52s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-703051 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-703051 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [58286223-3023-49e8-8c96-fbc4885799ab] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [58286223-3023-49e8-8c96-fbc4885799ab] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004396708s
addons_test.go:696: (dbg) Run:  kubectl --context addons-703051 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-703051 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-703051 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.52s)
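
The credential check boils down to creating the busybox pod and reading the environment the gcp-auth addon injects; a minimal sketch, assuming the addons-703051 profile is still running and testdata paths are resolved from the minikube source tree:

    kubectl --context addons-703051 create -f testdata/busybox.yaml
    kubectl --context addons-703051 create sa gcp-auth-test
    # Both variables are expected to be present while the gcp-auth addon is active.
    kubectl --context addons-703051 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-703051 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"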

                                                
                                    
TestAddons/parallel/Registry (17.44s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 8.677628ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-l9ptj" [96cdab4e-1722-4bce-87dc-d0c270e803a6] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005361939s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-qx2bk" [ceaffdc5-fb32-4337-a3c4-e6a2a1d6a2b2] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003772332s
addons_test.go:394: (dbg) Run:  kubectl --context addons-703051 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-703051 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-703051 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.676488285s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-703051 ip
2025/12/16 02:28:44 [DEBUG] GET http://192.168.39.237:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-703051 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.44s)
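
The registry probe is an in-cluster HTTP request against the registry Service; a minimal sketch of rerunning it by hand while the addon is enabled:

    kubectl --context addons-703051 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Disable the addon afterwards, as the test does.
    out/minikube-linux-amd64 -p addons-703051 addons disable registry --alsologtostderr -v=1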

                                                
                                    
TestAddons/parallel/RegistryCreds (0.69s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 5.36605ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-703051
addons_test.go:334: (dbg) Run:  kubectl --context addons-703051 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-703051 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.69s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.66s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-t26lb" [c92b8090-ae54-4ce1-81da-6244c619b46a] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004204902s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-703051 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-703051 addons disable inspektor-gadget --alsologtostderr -v=1: (5.657920368s)
--- PASS: TestAddons/parallel/InspektorGadget (10.66s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.75s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 10.380781ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-f4xbr" [972a1533-af9a-480f-a4fb-80c6f4653290] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003939839s
addons_test.go:465: (dbg) Run:  kubectl --context addons-703051 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-703051 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.75s)
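
The health check relies on the Metrics API being served; a minimal sketch of the same verification (kubectl top fails until metrics-server has collected at least one round of metrics):

    kubectl --context addons-703051 top pods -n kube-system
    out/minikube-linux-amd64 -p addons-703051 addons disable metrics-server --alsologtostderr -v=1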

                                                
                                    
TestAddons/parallel/CSI (61.63s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1216 02:28:44.615778    8974 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1216 02:28:44.620226    8974 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1216 02:28:44.620251    8974 kapi.go:107] duration metric: took 4.485056ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 4.497523ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-703051 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-703051 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [c41f5f09-b865-4f4e-97b3-0b0b689548ff] Pending
helpers_test.go:353: "task-pv-pod" [c41f5f09-b865-4f4e-97b3-0b0b689548ff] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [c41f5f09-b865-4f4e-97b3-0b0b689548ff] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004269849s
addons_test.go:574: (dbg) Run:  kubectl --context addons-703051 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-703051 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-703051 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-703051 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-703051 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-703051 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-703051 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [47f8af21-40d8-489f-a354-08b208876548] Pending
helpers_test.go:353: "task-pv-pod-restore" [47f8af21-40d8-489f-a354-08b208876548] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003048666s
addons_test.go:616: (dbg) Run:  kubectl --context addons-703051 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-703051 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-703051 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-703051 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-703051 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-703051 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.834288115s)
--- PASS: TestAddons/parallel/CSI (61.63s)
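
The CSI flow above is provision, pod, snapshot, then restore, driven entirely by manifests from the minikube repository's testdata directory; a condensed sketch of the same sequence (paths are relative to the source tree):

    kubectl --context addons-703051 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-703051 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-703051 create -f testdata/csi-hostpath-driver/snapshot.yaml
    # Drop the original pod and claim, then restore the snapshot into a new claim.
    kubectl --context addons-703051 delete pod task-pv-pod
    kubectl --context addons-703051 delete pvc hpvc
    kubectl --context addons-703051 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-703051 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
    # Cleanup mirrors the tail of the test.
    kubectl --context addons-703051 delete pod task-pv-pod-restore
    kubectl --context addons-703051 delete pvc hpvc-restore
    kubectl --context addons-703051 delete volumesnapshot new-snapshot-demo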

                                                
                                    
TestAddons/parallel/Headlamp (21.94s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-703051 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-mg79z" [f68888a3-71d9-424a-a77b-47c8a535bb44] Pending
helpers_test.go:353: "headlamp-dfcdc64b-mg79z" [f68888a3-71d9-424a-a77b-47c8a535bb44] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-mg79z" [f68888a3-71d9-424a-a77b-47c8a535bb44] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.006119787s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-703051 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-703051 addons disable headlamp --alsologtostderr -v=1: (5.933051957s)
--- PASS: TestAddons/parallel/Headlamp (21.94s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.51s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-49c7f" [1a2e4db6-98d4-424f-8529-eae6ea23933c] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003371308s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-703051 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.51s)

                                                
                                    
TestAddons/parallel/LocalPath (14.25s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-703051 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-703051 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-703051 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [22e75530-2d20-48b5-a745-2650f8da082c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [22e75530-2d20-48b5-a745-2650f8da082c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [22e75530-2d20-48b5-a745-2650f8da082c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003524558s
addons_test.go:969: (dbg) Run:  kubectl --context addons-703051 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-703051 ssh "cat /opt/local-path-provisioner/pvc-f9648a3b-9c51-449d-b8e4-4a857e52bcbe_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-703051 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-703051 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-703051 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (14.25s)
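
A minimal sketch of the local-path exercise with the same manifests; note that the host path checked above embeds the PVC's UID, so the exact /opt/local-path-provisioner/... path differs on every run:

    kubectl --context addons-703051 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-703051 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # Inspect the bound claim to find the generated volume name for this run.
    kubectl --context addons-703051 get pvc test-pvc -o=json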

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.7s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-dj88n" [aba0db89-f004-4cbb-880e-fda531ad78c4] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003792069s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-703051 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.70s)

                                                
                                    
TestAddons/parallel/Yakd (11.71s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-t8hfv" [d828ad42-de1b-4480-891b-96fc577b07ee] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003431855s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-703051 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-703051 addons disable yakd --alsologtostderr -v=1: (5.709778267s)
--- PASS: TestAddons/parallel/Yakd (11.71s)

                                                
                                    
TestAddons/StoppedEnableDisable (88.49s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-703051
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-703051: (1m28.301561195s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-703051
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-703051
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-703051
--- PASS: TestAddons/StoppedEnableDisable (88.49s)
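
The point of this test is that addon toggling still works against a stopped profile; a minimal sketch:

    out/minikube-linux-amd64 stop -p addons-703051
    # Both commands are expected to succeed even though the cluster is stopped.
    out/minikube-linux-amd64 addons enable dashboard -p addons-703051
    out/minikube-linux-amd64 addons disable dashboard -p addons-703051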

                                                
                                    
TestCertOptions (40.11s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-972236 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1216 03:30:49.064870    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-972236 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (38.826687571s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-972236 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-972236 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-972236 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-972236" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-972236
--- PASS: TestCertOptions (40.11s)
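
The certificate options can be verified by hand with the same openssl call the test uses; a minimal sketch, assuming the built binary and KVM2 driver:

    out/minikube-linux-amd64 start -p cert-options-972236 --memory=3072 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
    # The extra SANs and the custom port should appear in the apiserver certificate.
    out/minikube-linux-amd64 -p cert-options-972236 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"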

                                                
                                    
TestCertExpiration (258.1s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-121062 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-121062 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (42.111169834s)
E1216 03:30:32.136252    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-121062 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-121062 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (35.108818197s)
helpers_test.go:176: Cleaning up "cert-expiration-121062" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-121062
--- PASS: TestCertExpiration (258.10s)
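
The expiration check is two starts of the same profile with different --cert-expiration values; a minimal sketch (the gap between the two starts within the 258s total suggests the test waits out the short 3m expiry before restarting):

    out/minikube-linux-amd64 start -p cert-expiration-121062 --memory=3072 \
      --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    # ...wait for the short-lived certificates to expire...
    out/minikube-linux-amd64 start -p cert-expiration-121062 --memory=3072 \
      --cert-expiration=8760h --driver=kvm2 --container-runtime=crio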

                                                
                                    
TestForceSystemdFlag (60.78s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-103596 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-103596 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (59.692162533s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-103596 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-103596" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-103596
--- PASS: TestForceSystemdFlag (60.78s)
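
The force-systemd check reads the CRI-O drop-in that minikube generates; a minimal sketch (with --force-systemd the cgroup manager in that file is expected to be systemd):

    out/minikube-linux-amd64 start -p force-systemd-flag-103596 --memory=3072 \
      --force-systemd --alsologtostderr -v=5 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p force-systemd-flag-103596 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf"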

                                                
                                    
TestForceSystemdEnv (54.2s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-050892 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-050892 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (53.340891503s)
helpers_test.go:176: Cleaning up "force-systemd-env-050892" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-050892
--- PASS: TestForceSystemdEnv (54.20s)

                                                
                                    
TestErrorSpam/setup (35.03s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-421871 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-421871 --driver=kvm2  --container-runtime=crio
E1216 02:33:09.607753    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:33:09.614134    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:33:09.625495    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:33:09.646833    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:33:09.688195    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:33:09.769670    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:33:09.931279    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:33:10.253006    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:33:10.894827    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:33:12.176187    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:33:14.737915    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:33:19.860529    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-421871 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-421871 --driver=kvm2  --container-runtime=crio: (35.024908004s)
--- PASS: TestErrorSpam/setup (35.03s)

                                                
                                    
TestErrorSpam/start (0.31s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421871 --log_dir /tmp/nospam-421871 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421871 --log_dir /tmp/nospam-421871 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421871 --log_dir /tmp/nospam-421871 start --dry-run
--- PASS: TestErrorSpam/start (0.31s)

                                                
                                    
TestErrorSpam/status (0.62s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421871 --log_dir /tmp/nospam-421871 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421871 --log_dir /tmp/nospam-421871 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421871 --log_dir /tmp/nospam-421871 status
--- PASS: TestErrorSpam/status (0.62s)

                                                
                                    
TestErrorSpam/pause (1.44s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421871 --log_dir /tmp/nospam-421871 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421871 --log_dir /tmp/nospam-421871 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421871 --log_dir /tmp/nospam-421871 pause
E1216 02:33:30.102707    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestErrorSpam/pause (1.44s)

                                                
                                    
TestErrorSpam/unpause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421871 --log_dir /tmp/nospam-421871 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421871 --log_dir /tmp/nospam-421871 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421871 --log_dir /tmp/nospam-421871 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

                                                
                                    
TestErrorSpam/stop (5.08s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421871 --log_dir /tmp/nospam-421871 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-421871 --log_dir /tmp/nospam-421871 stop: (1.949503014s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421871 --log_dir /tmp/nospam-421871 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-421871 --log_dir /tmp/nospam-421871 stop: (1.486596548s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421871 --log_dir /tmp/nospam-421871 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-421871 --log_dir /tmp/nospam-421871 stop: (1.64385994s)
--- PASS: TestErrorSpam/stop (5.08s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22158-5036/.minikube/files/etc/test/nested/copy/8974/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (49.16s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-660584 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1216 02:33:50.584192    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-660584 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (49.155531941s)
--- PASS: TestFunctional/serial/StartWithProxy (49.16s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (33.35s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1216 02:34:26.576124    8974 config.go:182] Loaded profile config "functional-660584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-660584 --alsologtostderr -v=8
E1216 02:34:31.547098    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-660584 --alsologtostderr -v=8: (33.350836353s)
functional_test.go:678: soft start took 33.351478649s for "functional-660584" cluster.
I1216 02:34:59.927257    8974 config.go:182] Loaded profile config "functional-660584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (33.35s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-660584 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.33s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-660584 cache add registry.k8s.io/pause:3.1: (1.070704496s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-660584 cache add registry.k8s.io/pause:3.3: (1.173857479s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-660584 cache add registry.k8s.io/pause:latest: (1.088123165s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.33s)
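
The cache subcommands pre-pull images onto the node's container runtime; a minimal sketch of the add/verify cycle these cache tests are built around:

    out/minikube-linux-amd64 -p functional-660584 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-amd64 -p functional-660584 cache add registry.k8s.io/pause:3.3
    out/minikube-linux-amd64 -p functional-660584 cache add registry.k8s.io/pause:latest
    # List what minikube has cached, then confirm the images landed in CRI-O.
    out/minikube-linux-amd64 cache list
    out/minikube-linux-amd64 -p functional-660584 ssh sudo crictl images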

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-660584 /tmp/TestFunctionalserialCacheCmdcacheadd_local292108670/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 cache add minikube-local-cache-test:functional-660584
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-660584 cache add minikube-local-cache-test:functional-660584: (1.765620572s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 cache delete minikube-local-cache-test:functional-660584
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-660584
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.33s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-660584 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (159.661844ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.33s)
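
The reload test removes an image from the node and shows that cache reload restores it from the local cache; a minimal sketch:

    out/minikube-linux-amd64 -p functional-660584 ssh sudo crictl rmi registry.k8s.io/pause:latest
    # Expected to fail here with "no such image".
    out/minikube-linux-amd64 -p functional-660584 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-660584 cache reload
    # After the reload the same inspect succeeds.
    out/minikube-linux-amd64 -p functional-660584 ssh sudo crictl inspecti registry.k8s.io/pause:latest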

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 kubectl -- --context functional-660584 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-660584 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.91s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-660584 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-660584 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.912149478s)
functional_test.go:776: restart took 34.912269707s for "functional-660584" cluster.
I1216 02:35:42.326713    8974 config.go:182] Loaded profile config "functional-660584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (34.91s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-660584 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.22s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-660584 logs: (1.215647487s)
--- PASS: TestFunctional/serial/LogsCmd (1.22s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.19s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 logs --file /tmp/TestFunctionalserialLogsFileCmd1363276628/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-660584 logs --file /tmp/TestFunctionalserialLogsFileCmd1363276628/001/logs.txt: (1.188302316s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.19s)

                                                
                                    
TestFunctional/serial/InvalidService (4.07s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-660584 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-660584
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-660584: exit status 115 (214.960906ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.215:30156 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-660584 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.07s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.41s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-660584 config get cpus: exit status 14 (74.310753ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-660584 config get cpus: exit status 14 (61.347327ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.41s)
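
The round-trip above is easy to reproduce outside the test harness: `config get` on an unset key exits with status 14, and setting then unsetting the key restores that state. A minimal sketch, assuming the built binary at out/minikube-linux-amd64 and the profile name from this run:

// config_roundtrip_sketch.go — a minimal sketch of the set/get/unset cycle above.
package main

import (
	"fmt"
	"os/exec"
)

// run executes the minikube binary built by this job and returns combined
// output plus the process exit code (0 when the command succeeded).
func run(args ...string) (string, int) {
	out, err := exec.Command("./out/minikube-linux-amd64", args...).CombinedOutput()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	return string(out), code
}

func main() {
	_, code := run("-p", "functional-660584", "config", "get", "cpus")
	fmt.Println("get before set, exit:", code) // 14: key not found, as in the log

	run("-p", "functional-660584", "config", "set", "cpus", "2")
	out, code := run("-p", "functional-660584", "config", "get", "cpus")
	fmt.Printf("get after set, exit %d: %s", code, out) // 0 and "2"

	run("-p", "functional-660584", "config", "unset", "cpus")
	_, code = run("-p", "functional-660584", "config", "get", "cpus")
	fmt.Println("get after unset, exit:", code) // back to 14
}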

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.75s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-660584 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-660584 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 14713: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.75s)

                                                
                                    
TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-660584 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-660584 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (112.198597ms)

                                                
                                                
-- stdout --
	* [functional-660584] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:35:50.531729   14609 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:35:50.531814   14609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:35:50.531825   14609 out.go:374] Setting ErrFile to fd 2...
	I1216 02:35:50.531832   14609 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:35:50.532091   14609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
	I1216 02:35:50.532468   14609 out.go:368] Setting JSON to false
	I1216 02:35:50.533360   14609 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1095,"bootTime":1765851455,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 02:35:50.533412   14609 start.go:143] virtualization: kvm guest
	I1216 02:35:50.535142   14609 out.go:179] * [functional-660584] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 02:35:50.536597   14609 notify.go:221] Checking for updates...
	I1216 02:35:50.536621   14609 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:35:50.537601   14609 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:35:50.538685   14609 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	I1216 02:35:50.539691   14609 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	I1216 02:35:50.540665   14609 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 02:35:50.541544   14609 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:35:50.542787   14609 config.go:182] Loaded profile config "functional-660584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:35:50.543273   14609 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:35:50.575552   14609 out.go:179] * Using the kvm2 driver based on existing profile
	I1216 02:35:50.576542   14609 start.go:309] selected driver: kvm2
	I1216 02:35:50.576555   14609 start.go:927] validating driver "kvm2" against &{Name:functional-660584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-660584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:35:50.576649   14609 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:35:50.578328   14609 out.go:203] 
	W1216 02:35:50.579354   14609 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 02:35:50.580393   14609 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-660584 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.23s)
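
What the dry-run case shows is that memory validation happens before any VM work: a 250MB request is rejected with exit status 23 and the RSRC_INSUFFICIENT_REQ_MEMORY reason code. A minimal sketch of that check, reusing the binary path, profile and driver flags from this run:

// dryrun_memcheck_sketch.go — a minimal sketch of the validation above.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("./out/minikube-linux-amd64", "start",
		"-p", "functional-660584", "--dry-run", "--memory", "250MB",
		"--driver=kvm2", "--container-runtime=crio").CombinedOutput()

	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	// The log above shows exit status 23 and the insufficient-memory reason code.
	fmt.Println("exit code:", code)
	fmt.Println("RSRC_INSUFFICIENT_REQ_MEMORY present:",
		bytes.Contains(out, []byte("RSRC_INSUFFICIENT_REQ_MEMORY")))
}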

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-660584 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-660584 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (133.352458ms)

                                                
                                                
-- stdout --
	* [functional-660584] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:35:50.408529   14573 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:35:50.408647   14573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:35:50.408659   14573 out.go:374] Setting ErrFile to fd 2...
	I1216 02:35:50.408665   14573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:35:50.409171   14573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
	I1216 02:35:50.409773   14573 out.go:368] Setting JSON to false
	I1216 02:35:50.411029   14573 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1095,"bootTime":1765851455,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 02:35:50.411120   14573 start.go:143] virtualization: kvm guest
	I1216 02:35:50.416413   14573 out.go:179] * [functional-660584] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1216 02:35:50.417792   14573 notify.go:221] Checking for updates...
	I1216 02:35:50.417802   14573 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:35:50.418776   14573 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:35:50.420250   14573 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	I1216 02:35:50.422305   14573 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	I1216 02:35:50.426291   14573 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 02:35:50.427708   14573 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:35:50.429172   14573 config.go:182] Loaded profile config "functional-660584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:35:50.429752   14573 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:35:50.462939   14573 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1216 02:35:50.463976   14573 start.go:309] selected driver: kvm2
	I1216 02:35:50.463994   14573 start.go:927] validating driver "kvm2" against &{Name:functional-660584 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.2 ClusterName:functional-660584 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:35:50.464116   14573 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:35:50.465820   14573 out.go:203] 
	W1216 02:35:50.466753   14573 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 02:35:50.467745   14573 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.7s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.70s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (12.52s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-660584 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-660584 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-q9mhq" [72c2e2e4-7b33-4ad6-82aa-b419d9f9e674] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-q9mhq" [72c2e2e4-7b33-4ad6-82aa-b419d9f9e674] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.005519359s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.215:30798
functional_test.go:1680: http://192.168.39.215:30798: success! body:
Request served by hello-node-connect-7d85dfc575-q9mhq

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.215:30798
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.52s)
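
The last two steps above are simply "ask minikube for the NodePort URL, then GET it". A minimal sketch of that client side, assuming the hello-node-connect deployment and service created earlier in the test still exist and the binary path from this run:

// service_connect_sketch.go — a minimal sketch of the URL lookup and GET above.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	out, err := exec.Command("./out/minikube-linux-amd64", "-p", "functional-660584",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.215:30798 in this run

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// kicbase/echo-server replies with the request it served, as shown in the log.
	fmt.Printf("status %s\n%s", resp.Status, body)
}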

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (39.96s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [f9591bc1-0470-4801-ad23-2a0fffde2fb2] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005992212s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-660584 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-660584 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-660584 get pvc myclaim -o=json
I1216 02:36:04.226900    8974 retry.go:31] will retry after 1.318144611s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:925a8652-e9ac-43b9-9f4a-be65ecea20d6 ResourceVersion:815 Generation:0 CreationTimestamp:2025-12-16 02:36:04 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName:pvc-925a8652-e9ac-43b9-9f4a-be65ecea20d6 StorageClassName:0xc001d05770 VolumeMode:0xc001d05780 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-660584 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-660584 apply -f testdata/storage-provisioner/pod.yaml
I1216 02:36:05.719570    8974 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [95eca8a8-8cbb-4b57-9a73-5a5d348e24ad] Pending
helpers_test.go:353: "sp-pod" [95eca8a8-8cbb-4b57-9a73-5a5d348e24ad] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [95eca8a8-8cbb-4b57-9a73-5a5d348e24ad] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.004816944s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-660584 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-660584 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-660584 apply -f testdata/storage-provisioner/pod.yaml
I1216 02:36:32.782584    8974 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [73d82d3c-4356-477e-a774-0ffe2a3922e8] Pending
helpers_test.go:353: "sp-pod" [73d82d3c-4356-477e-a774-0ffe2a3922e8] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.00405372s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-660584 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.96s)
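
The wait in the middle of this test (the 'testpvc phase = "Pending", want "Bound"' retry) is a poll on the claim's status.phase. A minimal sketch of that poll, using the claim name and context from this run and a fixed two-second interval instead of the harness's jittered backoff:

// pvc_wait_sketch.go — a minimal sketch of the phase wait seen above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"time"
)

func pvcPhase() (string, error) {
	out, err := exec.Command("kubectl", "--context", "functional-660584",
		"get", "pvc", "myclaim", "-o=json").Output()
	if err != nil {
		return "", err
	}
	var pvc struct {
		Status struct {
			Phase string `json:"phase"`
		} `json:"status"`
	}
	if err := json.Unmarshal(out, &pvc); err != nil {
		return "", err
	}
	return pvc.Status.Phase, nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for {
		phase, err := pvcPhase()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("pvc phase:", phase)
		if phase == "Bound" {
			return
		}
		if time.Now().After(deadline) {
			log.Fatal("timed out waiting for PVC to bind")
		}
		time.Sleep(2 * time.Second) // the harness uses a jittered backoff instead
	}
}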

                                                
                                    
TestFunctional/parallel/SSHCmd (0.36s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.36s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.14s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh -n functional-660584 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 cp functional-660584:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2252982967/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh -n functional-660584 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh -n functional-660584 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.14s)

                                                
                                    
TestFunctional/parallel/MySQL (32.37s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-660584 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
2025/12/16 02:36:03 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:353: "mysql-6bcdcbc558-g4xxf" [8f9182f6-a4d1-4bac-8ef4-af24450b5de9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-g4xxf" [8f9182f6-a4d1-4bac-8ef4-af24450b5de9] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.006226253s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-660584 exec mysql-6bcdcbc558-g4xxf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-660584 exec mysql-6bcdcbc558-g4xxf -- mysql -ppassword -e "show databases;": exit status 1 (163.535823ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 02:36:27.025055    8974 retry.go:31] will retry after 661.778309ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-660584 exec mysql-6bcdcbc558-g4xxf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-660584 exec mysql-6bcdcbc558-g4xxf -- mysql -ppassword -e "show databases;": exit status 1 (138.226479ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 02:36:27.826396    8974 retry.go:31] will retry after 1.287948968s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-660584 exec mysql-6bcdcbc558-g4xxf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-660584 exec mysql-6bcdcbc558-g4xxf -- mysql -ppassword -e "show databases;": exit status 1 (220.584828ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 02:36:29.335547    8974 retry.go:31] will retry after 2.731976758s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-660584 exec mysql-6bcdcbc558-g4xxf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-660584 exec mysql-6bcdcbc558-g4xxf -- mysql -ppassword -e "show databases;": exit status 1 (155.261127ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 02:36:32.223751    8974 retry.go:31] will retry after 2.70079441s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-660584 exec mysql-6bcdcbc558-g4xxf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.37s)
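
The retries above (ERROR 2002 while mysqld is still starting, then ERROR 1045 while credentials are being set up) follow the usual pattern of re-running the in-pod query with a growing delay. A minimal sketch of that loop, with the pod name from this run hard-coded purely for illustration:

// mysql_retry_sketch.go — a minimal sketch of the retry loop visible above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func showDatabases(pod string) ([]byte, error) {
	return exec.Command("kubectl", "--context", "functional-660584", "exec", pod,
		"--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
}

func main() {
	pod := "mysql-6bcdcbc558-g4xxf" // pod name from this particular run
	delay := 500 * time.Millisecond
	for attempt := 1; attempt <= 8; attempt++ {
		out, err := showDatabases(pod)
		if err == nil {
			fmt.Printf("succeeded on attempt %d:\n%s", attempt, out)
			return
		}
		// Early attempts fail while the server is still coming up, as in the log.
		fmt.Printf("attempt %d failed (%v), retrying in %s\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	log.Fatal("mysql never became ready")
}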

                                                
                                    
TestFunctional/parallel/FileSync (0.15s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/8974/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "sudo cat /etc/test/nested/copy/8974/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.15s)

                                                
                                    
TestFunctional/parallel/CertSync (1.27s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/8974.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "sudo cat /etc/ssl/certs/8974.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/8974.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "sudo cat /usr/share/ca-certificates/8974.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/89742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "sudo cat /etc/ssl/certs/89742.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/89742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "sudo cat /usr/share/ca-certificates/89742.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.27s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-660584 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-660584 ssh "sudo systemctl is-active docker": exit status 1 (170.589324ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-660584 ssh "sudo systemctl is-active containerd": exit status 1 (204.668575ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)
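
The point of this check is that on a crio cluster the other runtimes stay disabled: systemctl is-active prints "inactive" and exits non-zero (the remote status 3 surfaces through minikube ssh as the exit status 1 captured above). A minimal sketch, assuming the same binary path and profile:

// runtime_disabled_sketch.go — a minimal sketch of the is-active probes above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("./out/minikube-linux-amd64", "-p", "functional-660584",
			"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
		// err is expected to be non-nil here precisely because the unit is inactive.
		fmt.Printf("%s: %q (err: %v)\n", unit, string(out), err)
	}
}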

                                                
                                    
TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (9.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-660584 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-660584 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-mkjwv" [2a13f2cb-eb91-4f23-af8b-b6ae2ff461c4] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-mkjwv" [2a13f2cb-eb91-4f23-af8b-b6ae2ff461c4] Running
E1216 02:35:53.469076    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.007495557s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.20s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "383.514434ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "57.749404ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)
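
The "Took ..." lines are plain wall-clock measurements around the command. A minimal sketch of the same measurement, assuming the built binary path used throughout this run:

// profile_timing_sketch.go — a minimal sketch of how a "Took ..." line is produced.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	out, err := exec.Command("./out/minikube-linux-amd64", "profile", "list").CombinedOutput()
	if err != nil {
		log.Fatalf("%v\n%s", err, out)
	}
	fmt.Printf("Took %q to run %q\n",
		time.Since(start).String(), "out/minikube-linux-amd64 profile list")
}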

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.16s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-660584 /tmp/TestFunctionalparallelMountCmdany-port2939000126/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765852549284757640" to /tmp/TestFunctionalparallelMountCmdany-port2939000126/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765852549284757640" to /tmp/TestFunctionalparallelMountCmdany-port2939000126/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765852549284757640" to /tmp/TestFunctionalparallelMountCmdany-port2939000126/001/test-1765852549284757640
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-660584 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (209.087501ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 02:35:49.494208    8974 retry.go:31] will retry after 558.431176ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 16 02:35 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 16 02:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 16 02:35 test-1765852549284757640
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh cat /mount-9p/test-1765852549284757640
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-660584 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [a85e2857-f7fc-4eba-8740-a116246e714e] Pending
helpers_test.go:353: "busybox-mount" [a85e2857-f7fc-4eba-8740-a116246e714e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [a85e2857-f7fc-4eba-8740-a116246e714e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [a85e2857-f7fc-4eba-8740-a116246e714e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.011451148s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-660584 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-660584 /tmp/TestFunctionalparallelMountCmdany-port2939000126/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.16s)
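
The flow above is: start minikube mount as a background process, poll until the 9p filesystem is visible in the guest (the first findmnt here failed, then succeeded after a retry), then verify the host-side files through the mount. A minimal end-to-end sketch under the same assumptions (built binary, functional-660584 profile), using a throwaway temp directory instead of the test's own path:

// mount_9p_sketch.go — a minimal sketch of the any-port mount flow above.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"time"
)

func main() {
	hostDir, err := os.MkdirTemp("", "mount-9p-demo")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(hostDir)
	if err := os.WriteFile(filepath.Join(hostDir, "created-by-demo"),
		[]byte("hello from the host\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	// Background daemon, like the "(dbg) daemon:" line in the log.
	mount := exec.Command("./out/minikube-linux-amd64", "mount",
		"-p", "functional-660584", hostDir+":/mount-9p")
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	defer mount.Process.Kill()

	// Poll until the 9p filesystem shows up; the first findmnt in the log failed too.
	for i := 0; i < 20; i++ {
		if err := exec.Command("./out/minikube-linux-amd64", "-p", "functional-660584",
			"ssh", "findmnt -T /mount-9p").Run(); err == nil {
			break
		}
		time.Sleep(time.Second)
	}

	out, err := exec.Command("./out/minikube-linux-amd64", "-p", "functional-660584",
		"ssh", "cat /mount-9p/created-by-demo").CombinedOutput()
	if err != nil {
		log.Fatalf("%v\n%s", err, out)
	}
	fmt.Printf("guest sees: %s", out)
}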

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "242.025284ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "55.527085ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.42s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-660584 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-660584
localhost/kicbase/echo-server:functional-660584
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-660584 image ls --format short --alsologtostderr:
I1216 02:36:03.984048   15372 out.go:360] Setting OutFile to fd 1 ...
I1216 02:36:03.984161   15372 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:36:03.984171   15372 out.go:374] Setting ErrFile to fd 2...
I1216 02:36:03.984178   15372 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:36:03.984480   15372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
I1216 02:36:03.985266   15372 config.go:182] Loaded profile config "functional-660584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:36:03.985411   15372 config.go:182] Loaded profile config "functional-660584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:36:03.987774   15372 ssh_runner.go:195] Run: systemctl --version
I1216 02:36:03.990159   15372 main.go:143] libmachine: domain functional-660584 has defined MAC address 52:54:00:27:96:0a in network mk-functional-660584
I1216 02:36:03.990582   15372 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:96:0a", ip: ""} in network mk-functional-660584: {Iface:virbr1 ExpiryTime:2025-12-16 03:33:51 +0000 UTC Type:0 Mac:52:54:00:27:96:0a Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:functional-660584 Clientid:01:52:54:00:27:96:0a}
I1216 02:36:03.990615   15372 main.go:143] libmachine: domain functional-660584 has defined IP address 192.168.39.215 and MAC address 52:54:00:27:96:0a in network mk-functional-660584
I1216 02:36:03.990766   15372 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/functional-660584/id_rsa Username:docker}
I1216 02:36:04.072217   15372 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-660584 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 88320b5498ff2 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/minikube-local-cache-test     │ functional-660584  │ 79bf8780f2502 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 01e8bacf0f500 │ 76MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                      │ functional-660584  │ 0f58510660d0b │ 1.47MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ a5f569d49a979 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 8aa150647e88a │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-660584  │ 9056ab77afb8e │ 4.94MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-660584 image ls --format table --alsologtostderr:
I1216 02:36:08.370688   15504 out.go:360] Setting OutFile to fd 1 ...
I1216 02:36:08.371054   15504 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:36:08.371070   15504 out.go:374] Setting ErrFile to fd 2...
I1216 02:36:08.371077   15504 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:36:08.371352   15504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
I1216 02:36:08.372176   15504 config.go:182] Loaded profile config "functional-660584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:36:08.372343   15504 config.go:182] Loaded profile config "functional-660584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:36:08.374965   15504 ssh_runner.go:195] Run: systemctl --version
I1216 02:36:08.377444   15504 main.go:143] libmachine: domain functional-660584 has defined MAC address 52:54:00:27:96:0a in network mk-functional-660584
I1216 02:36:08.377981   15504 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:96:0a", ip: ""} in network mk-functional-660584: {Iface:virbr1 ExpiryTime:2025-12-16 03:33:51 +0000 UTC Type:0 Mac:52:54:00:27:96:0a Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:functional-660584 Clientid:01:52:54:00:27:96:0a}
I1216 02:36:08.378019   15504 main.go:143] libmachine: domain functional-660584 has defined IP address 192.168.39.215 and MAC address 52:54:00:27:96:0a in network mk-functional-660584
I1216 02:36:08.378234   15504 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/functional-660584/id_rsa Username:docker}
I1216 02:36:08.467495   15504 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-660584 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a94
9a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"79bf8780f250285809115c22b367d45b13a20d3eb0a84e4efc01d84b1f115556","repoDigests":["localhost/minikube-local-cache-test@sha256:4a2e5fa969107ad385e3163d1892f1be0a01543cc50a3441fb6bd6a16e630365"],"repoTags":["localhost/minikube-local-cache-test:functional-660584"],"size":"3330"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":["registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077","registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"89046001"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":["registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6","registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9
aa0051cc68ed281a4c26dda"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"53848919"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kube
rnetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"ab329436ec474a4858dd2371fb4f1a5428a0d3243c151943808e5b52ff04e45c","repoDigests":["docker.io/library/ae7acab45b6c98e028c2abdb9b19a4cebb8d5e5941590755cb85a31fc438cbe5-tmp@sha256:3817ffb3e029b724169532589c9a19d843eac7d0d37dc567ce81ba501aeef7ae"],"repoTags":[],"size":"1466018"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb","registry.k8s.io/kube-controller-manager@sha256
:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"76004183"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b78
0fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-660584"],"size":"4944818"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"0f58510660d0b8209694c58d7e4892f4fe21ed4a4a3bece835800a51a8e2de36","repoDigests":["localhost/my-image@sha25
6:348134aceadbbe08ff8f6f49834f8e16b6808e7bccbf2bb12fe2de88b41eb00f"],"repoTags":["localhost/my-image:functional-660584"],"size":"1468600"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":["registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"73145240"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-660584 image ls --format json --alsologtostderr:
I1216 02:36:08.183503   15493 out.go:360] Setting OutFile to fd 1 ...
I1216 02:36:08.183756   15493 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:36:08.183765   15493 out.go:374] Setting ErrFile to fd 2...
I1216 02:36:08.183770   15493 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:36:08.183941   15493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
I1216 02:36:08.184460   15493 config.go:182] Loaded profile config "functional-660584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:36:08.184544   15493 config.go:182] Loaded profile config "functional-660584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:36:08.186538   15493 ssh_runner.go:195] Run: systemctl --version
I1216 02:36:08.188591   15493 main.go:143] libmachine: domain functional-660584 has defined MAC address 52:54:00:27:96:0a in network mk-functional-660584
I1216 02:36:08.188956   15493 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:96:0a", ip: ""} in network mk-functional-660584: {Iface:virbr1 ExpiryTime:2025-12-16 03:33:51 +0000 UTC Type:0 Mac:52:54:00:27:96:0a Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:functional-660584 Clientid:01:52:54:00:27:96:0a}
I1216 02:36:08.188990   15493 main.go:143] libmachine: domain functional-660584 has defined IP address 192.168.39.215 and MAC address 52:54:00:27:96:0a in network mk-functional-660584
I1216 02:36:08.189114   15493 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/functional-660584/id_rsa Username:docker}
I1216 02:36:08.272003   15493 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.19s)
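
For reference, the JSON listing above can be filtered on the host with a JSON processor; a minimal sketch, assuming jq is installed (jq is not part of this test run), using the profile name from this job:

    # Print "repoTags size" pairs from the crio image store of this profile
    out/minikube-linux-amd64 -p functional-660584 image ls --format json \
      | jq -r '.[] | "\(.repoTags | join(",")) \(.size)"'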

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-660584 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1512fa1bace72d9bcaa7471e364e972c60805474184840a707b6afa05bde3a74
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "73145240"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 79bf8780f250285809115c22b367d45b13a20d3eb0a84e4efc01d84b1f115556
repoDigests:
- localhost/minikube-local-cache-test@sha256:4a2e5fa969107ad385e3163d1892f1be0a01543cc50a3441fb6bd6a16e630365
repoTags:
- localhost/minikube-local-cache-test:functional-660584
size: "3330"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
- registry.k8s.io/kube-apiserver@sha256:f0e0dc00029af1a9258587ef181f17a9eb7605d3d69a72668f4f6709f72005fd
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "89046001"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
- registry.k8s.io/kube-scheduler@sha256:7a0dd12264041dec5dcbb44eeaad051d21560c6d9aa0051cc68ed281a4c26dda
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "53848919"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-660584
size: "4944818"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
- registry.k8s.io/kube-controller-manager@sha256:9eb769377f8fdeab9e1428194e2b7d19584b63a5fda8f2f406900ee7893c2f4e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "76004183"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-660584 image ls --format yaml --alsologtostderr:
I1216 02:36:04.178619   15403 out.go:360] Setting OutFile to fd 1 ...
I1216 02:36:04.178976   15403 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:36:04.178996   15403 out.go:374] Setting ErrFile to fd 2...
I1216 02:36:04.179003   15403 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:36:04.179323   15403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
I1216 02:36:04.180250   15403 config.go:182] Loaded profile config "functional-660584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:36:04.180406   15403 config.go:182] Loaded profile config "functional-660584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:36:04.183230   15403 ssh_runner.go:195] Run: systemctl --version
I1216 02:36:04.186282   15403 main.go:143] libmachine: domain functional-660584 has defined MAC address 52:54:00:27:96:0a in network mk-functional-660584
I1216 02:36:04.186815   15403 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:96:0a", ip: ""} in network mk-functional-660584: {Iface:virbr1 ExpiryTime:2025-12-16 03:33:51 +0000 UTC Type:0 Mac:52:54:00:27:96:0a Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:functional-660584 Clientid:01:52:54:00:27:96:0a}
I1216 02:36:04.186858   15403 main.go:143] libmachine: domain functional-660584 has defined IP address 192.168.39.215 and MAC address 52:54:00:27:96:0a in network mk-functional-660584
I1216 02:36:04.187049   15403 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/functional-660584/id_rsa Username:docker}
I1216 02:36:04.282741   15403 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-660584 ssh pgrep buildkitd: exit status 1 (173.051084ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 image build -t localhost/my-image:functional-660584 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-660584 image build -t localhost/my-image:functional-660584 testdata/build --alsologtostderr: (3.450035824s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-660584 image build -t localhost/my-image:functional-660584 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ab329436ec4
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-660584
--> 0f58510660d
Successfully tagged localhost/my-image:functional-660584
0f58510660d0b8209694c58d7e4892f4fe21ed4a4a3bece835800a51a8e2de36
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-660584 image build -t localhost/my-image:functional-660584 testdata/build --alsologtostderr:
I1216 02:36:04.557368   15435 out.go:360] Setting OutFile to fd 1 ...
I1216 02:36:04.557648   15435 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:36:04.557659   15435 out.go:374] Setting ErrFile to fd 2...
I1216 02:36:04.557663   15435 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:36:04.557892   15435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
I1216 02:36:04.558464   15435 config.go:182] Loaded profile config "functional-660584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:36:04.559143   15435 config.go:182] Loaded profile config "functional-660584": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1216 02:36:04.561599   15435 ssh_runner.go:195] Run: systemctl --version
I1216 02:36:04.563904   15435 main.go:143] libmachine: domain functional-660584 has defined MAC address 52:54:00:27:96:0a in network mk-functional-660584
I1216 02:36:04.564345   15435 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:27:96:0a", ip: ""} in network mk-functional-660584: {Iface:virbr1 ExpiryTime:2025-12-16 03:33:51 +0000 UTC Type:0 Mac:52:54:00:27:96:0a Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:functional-660584 Clientid:01:52:54:00:27:96:0a}
I1216 02:36:04.564379   15435 main.go:143] libmachine: domain functional-660584 has defined IP address 192.168.39.215 and MAC address 52:54:00:27:96:0a in network mk-functional-660584
I1216 02:36:04.564522   15435 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/functional-660584/id_rsa Username:docker}
I1216 02:36:04.658412   15435 build_images.go:162] Building image from path: /tmp/build.2126364534.tar
I1216 02:36:04.658486   15435 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 02:36:04.681517   15435 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2126364534.tar
I1216 02:36:04.690944   15435 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2126364534.tar: stat -c "%s %y" /var/lib/minikube/build/build.2126364534.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2126364534.tar': No such file or directory
I1216 02:36:04.690980   15435 ssh_runner.go:362] scp /tmp/build.2126364534.tar --> /var/lib/minikube/build/build.2126364534.tar (3072 bytes)
I1216 02:36:04.755900   15435 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2126364534
I1216 02:36:04.773798   15435 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2126364534 -xf /var/lib/minikube/build/build.2126364534.tar
I1216 02:36:04.790464   15435 crio.go:315] Building image: /var/lib/minikube/build/build.2126364534
I1216 02:36:04.790534   15435 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-660584 /var/lib/minikube/build/build.2126364534 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1216 02:36:07.918190   15435 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-660584 /var/lib/minikube/build/build.2126364534 --cgroup-manager=cgroupfs: (3.127632603s)
I1216 02:36:07.918259   15435 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2126364534
I1216 02:36:07.933017   15435 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2126364534.tar
I1216 02:36:07.945721   15435 build_images.go:218] Built localhost/my-image:functional-660584 from /tmp/build.2126364534.tar
I1216 02:36:07.945758   15435 build_images.go:134] succeeded building to: functional-660584
I1216 02:36:07.945766   15435 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.80s)
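
The three STEP lines logged above (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt) correspond to the Dockerfile shipped in testdata/build; a minimal sketch of reproducing the same build by hand with the commands from this run:

    # Sketch of the Dockerfile implied by the STEP output (actual file lives in testdata/build)
    #   FROM gcr.io/k8s-minikube/busybox
    #   RUN true
    #   ADD content.txt /
    # Build inside the node's container runtime (podman under crio) and verify the tag
    out/minikube-linux-amd64 -p functional-660584 image build -t localhost/my-image:functional-660584 testdata/build --alsologtostderr
    out/minikube-linux-amd64 -p functional-660584 image ls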

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.708335237s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-660584
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 image load --daemon kicbase/echo-server:functional-660584 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 image load --daemon kicbase/echo-server:functional-660584 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-660584
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 image load --daemon kicbase/echo-server:functional-660584 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)
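
The tag-and-load flow exercised above can be reproduced manually; a minimal sketch using the exact commands from this run (image and profile names taken from the log):

    docker pull kicbase/echo-server:latest
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-660584
    # Push the locally tagged image from the docker daemon into the cluster's crio store
    out/minikube-linux-amd64 -p functional-660584 image load --daemon kicbase/echo-server:functional-660584
    out/minikube-linux-amd64 -p functional-660584 image ls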

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 image save kicbase/echo-server:functional-660584 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 image rm kicbase/echo-server:functional-660584 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.83s)
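
Taken together with ImageSaveToFile and ImageRemove above, this exercises a tarball round trip; a minimal sketch with the tar path used by this job:

    # Export the image from the cluster to a tarball on the host
    out/minikube-linux-amd64 -p functional-660584 image save kicbase/echo-server:functional-660584 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
    # Re-import it after the in-cluster copy was removed with 'image rm'
    out/minikube-linux-amd64 -p functional-660584 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
    out/minikube-linux-amd64 -p functional-660584 image ls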

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-660584
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 image save --daemon kicbase/echo-server:functional-660584 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-660584
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.89s)
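
A minimal sketch of the save-to-daemon direction checked here: the image is removed from the host docker daemon, pulled back out of the cluster, then inspected (names as in this run):

    docker rmi kicbase/echo-server:functional-660584
    out/minikube-linux-amd64 -p functional-660584 image save --daemon kicbase/echo-server:functional-660584 --alsologtostderr
    docker image inspect localhost/kicbase/echo-server:functional-660584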

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-660584 /tmp/TestFunctionalparallelMountCmdspecific-port791374047/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-660584 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (212.970153ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 02:35:58.656480    8974 retry.go:31] will retry after 402.66494ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-660584 /tmp/TestFunctionalparallelMountCmdspecific-port791374047/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-660584 ssh "sudo umount -f /mount-9p": exit status 1 (193.354557ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-660584 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-660584 /tmp/TestFunctionalparallelMountCmdspecific-port791374047/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.40s)
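
For reference, the specific-port variant pins the host-side 9p server to a fixed port; a minimal sketch of the same flow outside the test harness, with the path and port taken from this run:

    # Start the 9p mount in the background on port 46464
    out/minikube-linux-amd64 mount -p functional-660584 /tmp/TestFunctionalparallelMountCmdspecific-port791374047/001:/mount-9p --alsologtostderr -v=1 --port 46464 &
    # Verify the mount is visible inside the guest
    out/minikube-linux-amd64 -p functional-660584 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-660584 ssh -- ls -la /mount-9p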

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 service list -o json
functional_test.go:1504: Took "498.982406ms" to run "out/minikube-linux-amd64 -p functional-660584 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.215:31545
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.215:31545
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)
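
A minimal sketch of how the endpoint above is resolved: `service --url` maps the hello-node NodePort to the node IP (the 192.168.39.215:31545 value is specific to this run), and the sibling subtests exercise the HTTPS and JSON variants of the same lookup:

    out/minikube-linux-amd64 -p functional-660584 service hello-node --url
    out/minikube-linux-amd64 -p functional-660584 service --namespace=default --https --url hello-node
    out/minikube-linux-amd64 -p functional-660584 service list -o json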

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-660584 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665412138/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-660584 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665412138/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-660584 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665412138/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-660584 ssh "findmnt -T" /mount1: exit status 1 (227.591434ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 02:36:00.067886    8974 retry.go:31] will retry after 331.053981ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-660584 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-660584 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665412138/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-660584 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665412138/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-660584 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3665412138/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.27s)
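
The cleanup path relies on the --kill flag to terminate every mount daemon for the profile; a minimal sketch with the three mount points of this run reduced to one (/tmp/some-host-dir is a placeholder for the temp directory the test generates):

    out/minikube-linux-amd64 mount -p functional-660584 /tmp/some-host-dir:/mount1 --alsologtostderr -v=1 &
    # Kill all outstanding mount processes for the profile in one shot
    out/minikube-linux-amd64 mount -p functional-660584 --kill=true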

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-660584 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-660584
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-660584
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-660584
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22158-5036/.minikube/files/etc/test/nested/copy/8974/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (72.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-668205 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-668205 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m12.326123853s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (72.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (374.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1216 02:37:52.271298    8974 config.go:182] Loaded profile config "functional-668205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-668205 --alsologtostderr -v=8
E1216 02:38:09.599254    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:38:37.313393    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:49.065428    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:49.071875    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:49.083321    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:49.104776    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:49.146038    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:49.227882    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:49.389460    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:49.711362    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:50.353401    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:51.635123    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:54.197183    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:40:59.318491    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:41:09.560387    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:41:30.042437    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:42:11.004318    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:43:09.599247    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:43:32.927438    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-668205 --alsologtostderr -v=8: (6m14.082085437s)
functional_test.go:678: soft start took 6m14.082430696s for "functional-668205" cluster.
I1216 02:44:06.353737    8974 config.go:182] Loaded profile config "functional-668205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (374.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-668205 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-668205 cache add registry.k8s.io/pause:3.1: (1.110999812s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-668205 cache add registry.k8s.io/pause:3.3: (1.212978531s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-668205 cache add registry.k8s.io/pause:latest: (1.146478935s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-668205 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach1023724950/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 cache add minikube-local-cache-test:functional-668205
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-668205 cache add minikube-local-cache-test:functional-668205: (1.740555442s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 cache delete minikube-local-cache-test:functional-668205
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-668205
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (2.01s)
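
A minimal sketch of the local-cache flow above: an image is built against the host docker daemon, added to minikube's cache, then both the cache entry and the host tag are cleaned up (/tmp/local-cache-context stands in for the per-test temp build context):

    docker build -t minikube-local-cache-test:functional-668205 /tmp/local-cache-context
    out/minikube-linux-amd64 -p functional-668205 cache add minikube-local-cache-test:functional-668205
    out/minikube-linux-amd64 -p functional-668205 cache delete minikube-local-cache-test:functional-668205
    docker rmi minikube-local-cache-test:functional-668205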

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.44s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668205 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (177.820254ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.44s)
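Note: the sequence above exercises `cache reload` end to end; a rough equivalent by hand, using the same commands the test runs.

    # remove the cached image from the node's container runtime
    out/minikube-linux-amd64 -p functional-668205 ssh sudo crictl rmi registry.k8s.io/pause:latest
    # expected to fail now with "no such image"
    out/minikube-linux-amd64 -p functional-668205 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # push all cached images back into the node, then re-check
    out/minikube-linux-amd64 -p functional-668205 cache reload
    out/minikube-linux-amd64 -p functional-668205 ssh sudo crictl inspecti registry.k8s.io/pause:latest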

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 kubectl -- --context functional-668205 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-668205 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (32.63s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-668205 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-668205 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.625170802s)
functional_test.go:776: restart took 32.625283902s for "functional-668205" cluster.
I1216 02:44:46.706862    8974 config.go:182] Loaded profile config "functional-668205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (32.63s)
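Note: the restart above passes a component flag straight through to the apiserver; for reference, a sketch of the same invocation (any other `component.key=value` pair follows the same pattern).

    # restart the existing profile with an extra admission plugin and wait for all components
    out/minikube-linux-amd64 start -p functional-668205 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all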

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-668205 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.31s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-668205 logs: (1.309511218s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.31s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3162241515/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-668205 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs3162241515/001/logs.txt: (1.307184339s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.7s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-668205 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-668205
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-668205: exit status 115 (226.291464ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.140:30486 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-668205 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-668205 delete -f testdata/invalidsvc.yaml: (1.291097688s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (4.70s)
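Note: the non-zero exit above is the intended behaviour: `minikube service` exits 115 (SVC_UNREACHABLE) when the service has no running pod behind it. A hand-run sketch using the same testdata manifest:

    kubectl --context functional-668205 apply -f testdata/invalidsvc.yaml
    # expected to fail: the service exists but has no running endpoints
    out/minikube-linux-amd64 service invalid-svc -p functional-668205 || echo "exit $?"
    kubectl --context functional-668205 delete -f testdata/invalidsvc.yaml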

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.42s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668205 config get cpus: exit status 14 (63.746528ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668205 config get cpus: exit status 14 (61.813073ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.42s)
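Note: the round trip above shows the profile-level config store: `config get` on an unset key exits with status 14, while set/unset succeed. A minimal sketch:

    out/minikube-linux-amd64 -p functional-668205 config unset cpus
    out/minikube-linux-amd64 -p functional-668205 config get cpus   # exits 14: key not found
    out/minikube-linux-amd64 -p functional-668205 config set cpus 2
    out/minikube-linux-amd64 -p functional-668205 config get cpus   # prints the stored value
    out/minikube-linux-amd64 -p functional-668205 config unset cpus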

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (14.53s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-668205 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-668205 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 18921: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (14.53s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.2s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-668205 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-668205 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (100.099861ms)

                                                
                                                
-- stdout --
	* [functional-668205] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:45:23.693644   18877 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:45:23.693860   18877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:45:23.693868   18877 out.go:374] Setting ErrFile to fd 2...
	I1216 02:45:23.693872   18877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:45:23.694061   18877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
	I1216 02:45:23.694439   18877 out.go:368] Setting JSON to false
	I1216 02:45:23.695201   18877 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1669,"bootTime":1765851455,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 02:45:23.695251   18877 start.go:143] virtualization: kvm guest
	I1216 02:45:23.696983   18877 out.go:179] * [functional-668205] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 02:45:23.698099   18877 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:45:23.698147   18877 notify.go:221] Checking for updates...
	I1216 02:45:23.700050   18877 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:45:23.700949   18877 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	I1216 02:45:23.701731   18877 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	I1216 02:45:23.703117   18877 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 02:45:23.704080   18877 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:45:23.705398   18877 config.go:182] Loaded profile config "functional-668205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 02:45:23.705815   18877 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:45:23.735044   18877 out.go:179] * Using the kvm2 driver based on existing profile
	I1216 02:45:23.736008   18877 start.go:309] selected driver: kvm2
	I1216 02:45:23.736019   18877 start.go:927] validating driver "kvm2" against &{Name:functional-668205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-668205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:45:23.736105   18877 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:45:23.737653   18877 out.go:203] 
	W1216 02:45:23.738645   18877 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 02:45:23.739644   18877 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-668205 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.20s)
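Note: the first invocation is expected to fail: even with `--dry-run`, minikube validates the requested resources against the existing profile and exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY) when the memory request is below the usable minimum. A sketch of both cases:

    # rejected: 250MB is below the 1800MB minimum, exit status 23
    out/minikube-linux-amd64 start -p functional-668205 --dry-run --memory 250MB \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
    # accepted: same dry run without the undersized memory request
    out/minikube-linux-amd64 start -p functional-668205 --dry-run \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.35.0-beta.0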

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-668205 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-668205 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (117.88788ms)

                                                
                                                
-- stdout --
	* [functional-668205] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:45:33.423989   19381 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:45:33.424122   19381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:45:33.424133   19381 out.go:374] Setting ErrFile to fd 2...
	I1216 02:45:33.424139   19381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:45:33.424553   19381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
	I1216 02:45:33.425137   19381 out.go:368] Setting JSON to false
	I1216 02:45:33.426300   19381 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1678,"bootTime":1765851455,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 02:45:33.426381   19381 start.go:143] virtualization: kvm guest
	I1216 02:45:33.427787   19381 out.go:179] * [functional-668205] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1216 02:45:33.428916   19381 notify.go:221] Checking for updates...
	I1216 02:45:33.428935   19381 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 02:45:33.429887   19381 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 02:45:33.431020   19381 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	I1216 02:45:33.432184   19381 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	I1216 02:45:33.433161   19381 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 02:45:33.434187   19381 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 02:45:33.435966   19381 config.go:182] Loaded profile config "functional-668205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 02:45:33.436574   19381 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 02:45:33.466059   19381 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1216 02:45:33.467232   19381 start.go:309] selected driver: kvm2
	I1216 02:45:33.467248   19381 start.go:927] validating driver "kvm2" against &{Name:functional-668205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22158/minikube-v1.37.0-1765836331-22158-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765575274-22117@sha256:47728bbc099e81c562059898613d7210c388d2eec3b98cd9603df2bbe9af09cb Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-beta.0 ClusterName:functional-668205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 02:45:33.467361   19381 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 02:45:33.469211   19381 out.go:203] 
	W1216 02:45:33.470224   19381 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 02:45:33.471188   19381 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.65s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (0.65s)
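Note: a sketch of the three status variants exercised above; in the Go-template form, the labels to the left of each ':' are free-form output text, while the {{.Field}} names must match the status struct.

    out/minikube-linux-amd64 -p functional-668205 status
    out/minikube-linux-amd64 -p functional-668205 status -o json
    out/minikube-linux-amd64 -p functional-668205 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'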

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (25.62s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-668205 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-668205 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-8qwbf" [6419bc2e-52ed-4e17-83b1-3e7659276b2d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-8qwbf" [6419bc2e-52ed-4e17-83b1-3e7659276b2d] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 25.005383476s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.140:31292
functional_test.go:1680: http://192.168.39.140:31292: success! body:
Request served by hello-node-connect-9f67c86d4-8qwbf

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.140:31292
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (25.62s)
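Note: a condensed version of the flow above: deploy the echo server, expose it as a NodePort, resolve the URL through minikube, and hit it. The curl call is for illustration; the test itself uses a Go HTTP client, and the pod must be Running before the request succeeds.

    kubectl --context functional-668205 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-668205 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-668205 service hello-node-connect --url)
    curl -s "$URL"   # echoes the request back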

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (42.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [b9f8a8db-5902-47d0-adb8-09eab26620cf] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003744755s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-668205 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-668205 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-668205 get pvc myclaim -o=json
I1216 02:45:01.238847    8974 retry.go:31] will retry after 1.440636973s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:6d14cc87-cbfc-46b3-a802-81335648f88c ResourceVersion:621 Generation:0 CreationTimestamp:2025-12-16 02:45:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001cdd720 VolumeMode:0xc001cdd730 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-668205 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-668205 apply -f testdata/storage-provisioner/pod.yaml
I1216 02:45:02.884395    8974 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [0dbe958a-f531-4b55-9441-b6cae1f8f1f4] Pending
helpers_test.go:353: "sp-pod" [0dbe958a-f531-4b55-9441-b6cae1f8f1f4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [0dbe958a-f531-4b55-9441-b6cae1f8f1f4] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.006214086s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-668205 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-668205 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-668205 apply -f testdata/storage-provisioner/pod.yaml
I1216 02:45:29.881865    8974 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [4ee02ac3-078e-40f5-84ee-f672d6edda0b] Pending
helpers_test.go:353: "sp-pod" [4ee02ac3-078e-40f5-84ee-f672d6edda0b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [4ee02ac3-078e-40f5-84ee-f672d6edda0b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.006629889s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-668205 exec sp-pod -- ls /tmp/mount
2025/12/16 02:45:38 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (42.07s)
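Note: the test checks that data written through the PVC survives pod deletion: a file is created, the pod is deleted and recreated against the same claim, and the file is still there. A sketch with the same testdata manifests (waits for the pod to become Ready are omitted):

    kubectl --context functional-668205 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-668205 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-668205 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-668205 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-668205 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-668205 exec sp-pod -- ls /tmp/mount   # foo should still be listed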

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.37s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh -n functional-668205 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 cp functional-668205:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp4032118302/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh -n functional-668205 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh -n functional-668205 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (1.15s)
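Note: `minikube cp` copies in both directions; a sketch of the host-to-guest and guest-to-host round trip shown above (the /tmp destination path on the host is arbitrary):

    # host -> guest, then verify inside the VM
    out/minikube-linux-amd64 -p functional-668205 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-668205 ssh -n functional-668205 "sudo cat /home/docker/cp-test.txt"
    # guest -> host
    out/minikube-linux-amd64 -p functional-668205 cp functional-668205:/home/docker/cp-test.txt /tmp/cp-test.txt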

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (31.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-668205 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-n82p9" [3be59982-e94b-43b3-9d66-f0b7a2713f47] Pending
helpers_test.go:353: "mysql-7d7b65bc95-n82p9" [3be59982-e94b-43b3-9d66-f0b7a2713f47] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-n82p9" [3be59982-e94b-43b3-9d66-f0b7a2713f47] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL: app=mysql healthy within 25.006080619s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-668205 exec mysql-7d7b65bc95-n82p9 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-668205 exec mysql-7d7b65bc95-n82p9 -- mysql -ppassword -e "show databases;": exit status 1 (176.92141ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 02:45:19.574570    8974 retry.go:31] will retry after 805.33323ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-668205 exec mysql-7d7b65bc95-n82p9 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-668205 exec mysql-7d7b65bc95-n82p9 -- mysql -ppassword -e "show databases;": exit status 1 (184.813962ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 02:45:20.566065    8974 retry.go:31] will retry after 1.988321804s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-668205 exec mysql-7d7b65bc95-n82p9 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-668205 exec mysql-7d7b65bc95-n82p9 -- mysql -ppassword -e "show databases;": exit status 1 (282.139245ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1216 02:45:22.837352    8974 retry.go:31] will retry after 2.241680175s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-668205 exec mysql-7d7b65bc95-n82p9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (31.04s)
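Note: the retries above are expected: the container reports Running before mysqld has finished initializing, so the first exec attempts are refused. A rough manual equivalent, retrying until the server accepts the query; the `kubectl wait` step and `deploy/mysql` target are conveniences not used by the test itself:

    kubectl --context functional-668205 replace --force -f testdata/mysql.yaml
    kubectl --context functional-668205 wait --for=condition=ready pod -l app=mysql --timeout=10m
    # may still need a few retries while mysqld finishes initializing
    until kubectl --context functional-668205 exec deploy/mysql -- mysql -ppassword -e "show databases;"; do
      sleep 2
    done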

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.19s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/8974/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "sudo cat /etc/test/nested/copy/8974/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.14s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/8974.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "sudo cat /etc/ssl/certs/8974.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/8974.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "sudo cat /usr/share/ca-certificates/8974.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/89742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "sudo cat /etc/ssl/certs/89742.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/89742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "sudo cat /usr/share/ca-certificates/89742.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.09s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-668205 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.31s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668205 ssh "sudo systemctl is-active docker": exit status 1 (156.568989ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668205 ssh "sudo systemctl is-active containerd": exit status 1 (153.628915ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.31s)
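Note: both non-zero exits above are the expected outcome: with cri-o as the active runtime, docker and containerd report `inactive`, and `systemctl is-active` exits 3 for an inactive unit, which ssh propagates. A sketch of the same check; the final line assumes the cri-o unit is named crio, which is not exercised by this test:

    # each prints "inactive" and exits with status 3 when cri-o is the selected runtime
    out/minikube-linux-amd64 -p functional-668205 ssh "sudo systemctl is-active docker"
    out/minikube-linux-amd64 -p functional-668205 ssh "sudo systemctl is-active containerd"
    out/minikube-linux-amd64 -p functional-668205 ssh "sudo systemctl is-active crio"   # expected: active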

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.32s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.07s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.08s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (26.17s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-668205 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-668205 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-wls9w" [54afacd6-11b2-4e5a-96b2-f20657dd5dee] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-wls9w" [54afacd6-11b2-4e5a-96b2-f20657dd5dee] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 26.007351768s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (26.17s)
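The DeployApp steps above are plain kubectl calls against the profile's context. Below is a minimal Go sketch of the same flow, assuming kubectl is on PATH and the context name matches the profile; the run helper and the use of kubectl wait (instead of the harness's own poll loop) are illustrative, not the test's code.

// Hypothetical sketch: reproduce the DeployApp steps outside the test harness.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run executes a command and returns its combined output, failing loudly on error.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
	return string(out)
}

func main() {
	ctx := "functional-668205" // kubectl context created by the minikube profile

	// Create the deployment and expose it as a NodePort service, as the test does.
	run("kubectl", "--context", ctx, "create", "deployment", "hello-node",
		"--image", "kicbase/echo-server")
	run("kubectl", "--context", ctx, "expose", "deployment", "hello-node",
		"--type=NodePort", "--port=8080")

	// Wait for the pod to become Ready instead of polling pod phases by hand.
	fmt.Print(run("kubectl", "--context", ctx, "wait", "--for=condition=Ready",
		"pod", "-l", "app=hello-node", "--timeout=120s"))
}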

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "343.717433ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "59.625901ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 service list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "323.750711ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "67.956403ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)
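The profile listing is also available as JSON via -o json. A small sketch that shells out to minikube (assumed to be on PATH rather than the report's out/minikube-linux-amd64 path) and decodes the output generically, since the exact schema is not shown in this report:

// Hypothetical sketch: inspect `minikube profile list -o json` without assuming
// the full schema; only a top-level JSON object is assumed.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		log.Fatalf("profile list failed: %v", err)
	}

	// Decode into a generic map so the sketch does not depend on field names.
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		log.Fatalf("unexpected output: %v", err)
	}
	for key, raw := range doc {
		fmt.Printf("%s: %d bytes of JSON\n", key, len(raw))
	}
}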

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 service list -o json
functional_test.go:1504: Took "540.364121ms" to run "out/minikube-linux-amd64 -p functional-668205 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (8.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-668205 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2549872245/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765853122158399343" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2549872245/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765853122158399343" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2549872245/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765853122158399343" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2549872245/001/test-1765853122158399343
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668205 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (217.646306ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 02:45:22.376401    8974 retry.go:31] will retry after 519.369924ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 16 02:45 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 16 02:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 16 02:45 test-1765853122158399343
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh cat /mount-9p/test-1765853122158399343
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-668205 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [38ba2a51-a445-4f4c-ae38-0d1d97ecf0f6] Pending
helpers_test.go:353: "busybox-mount" [38ba2a51-a445-4f4c-ae38-0d1d97ecf0f6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [38ba2a51-a445-4f4c-ae38-0d1d97ecf0f6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [38ba2a51-a445-4f4c-ae38-0d1d97ecf0f6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.00606885s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-668205 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-668205 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo2549872245/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (8.24s)
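The any-port mount test boils down to: start minikube mount in the background, poll findmnt inside the guest over SSH until the 9p mount appears, then tear the mount down. A hedged sketch of that flow, assuming minikube on PATH and a host directory /tmp/mount-demo that already exists:

// Hypothetical sketch: start a 9p mount in the background, wait for it to
// appear inside the guest, then tear it down, mirroring the any-port test.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-668205"
	hostDir := "/tmp/mount-demo" // assumed host directory; create it beforehand

	// `minikube mount` stays in the foreground, so run it as a background process.
	mount := exec.Command("minikube", "mount", "-p", profile, hostDir+":/mount-9p")
	if err := mount.Start(); err != nil {
		log.Fatalf("starting mount: %v", err)
	}
	defer mount.Process.Kill() // stop the mount daemon when done

	// Poll until findmnt inside the guest sees the 9p mount, as the test does.
	for i := 0; i < 10; i++ {
		err := exec.Command("minikube", "-p", profile, "ssh",
			"findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			log.Println("mount is visible in the guest")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("mount never appeared in the guest")
}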

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.140:31076
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.43s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.33s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.33s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.140:31076
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.28s)
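ServiceCmd/URL resolves the NodePort endpoint with service --url. A small sketch that fetches the URL and issues an HTTP GET against it, assuming minikube on PATH and that the hello-node service from the DeployApp step still exists:

// Hypothetical sketch: resolve the NodePort URL for hello-node and issue a
// plain HTTP GET against it, roughly what the ServiceCmd/URL check verifies.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-668205"

	out, err := exec.Command("minikube", "-p", profile, "service", "hello-node", "--url").Output()
	if err != nil {
		log.Fatalf("service --url failed: %v", err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.39.140:31076

	resp, err := http.Get(url)
	if err != nil {
		log.Fatalf("GET %s: %v", url, err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %s (%d bytes)\n", url, resp.Status, len(body))
}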

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.53s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-668205 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-668205
localhost/kicbase/echo-server:functional-668205
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-668205 image ls --format short --alsologtostderr:
I1216 02:45:35.802574   19514 out.go:360] Setting OutFile to fd 1 ...
I1216 02:45:35.802799   19514 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:45:35.802807   19514 out.go:374] Setting ErrFile to fd 2...
I1216 02:45:35.802811   19514 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:45:35.802982   19514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
I1216 02:45:35.803427   19514 config.go:182] Loaded profile config "functional-668205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 02:45:35.803508   19514 config.go:182] Loaded profile config "functional-668205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 02:45:35.805911   19514 ssh_runner.go:195] Run: systemctl --version
I1216 02:45:35.808705   19514 main.go:143] libmachine: domain functional-668205 has defined MAC address 52:54:00:00:a4:98 in network mk-functional-668205
I1216 02:45:35.809219   19514 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:00:a4:98", ip: ""} in network mk-functional-668205: {Iface:virbr1 ExpiryTime:2025-12-16 03:36:54 +0000 UTC Type:0 Mac:52:54:00:00:a4:98 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:functional-668205 Clientid:01:52:54:00:00:a4:98}
I1216 02:45:35.809254   19514 main.go:143] libmachine: domain functional-668205 has defined IP address 192.168.39.140 and MAC address 52:54:00:00:a4:98 in network mk-functional-668205
I1216 02:45:35.809423   19514 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/functional-668205/id_rsa Username:docker}
I1216 02:45:35.902519   19514 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-668205 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ public.ecr.aws/docker/library/mysql     │ 8.4                │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 45f3cc72d235f │ 76.9MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 8a4ded35a3eb1 │ 72MB   │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-668205  │ 79bf8780f2502 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ a3e246e9556e9 │ 63.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 7bb6219ddab95 │ 52.7MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ aa9d02839d8de │ 90.8MB │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-668205  │ 9056ab77afb8e │ 4.95MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-668205 image ls --format table --alsologtostderr:
I1216 02:45:36.005154   19535 out.go:360] Setting OutFile to fd 1 ...
I1216 02:45:36.005472   19535 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:45:36.005485   19535 out.go:374] Setting ErrFile to fd 2...
I1216 02:45:36.005492   19535 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:45:36.005766   19535 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
I1216 02:45:36.006542   19535 config.go:182] Loaded profile config "functional-668205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 02:45:36.006689   19535 config.go:182] Loaded profile config "functional-668205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 02:45:36.009322   19535 ssh_runner.go:195] Run: systemctl --version
I1216 02:45:36.012233   19535 main.go:143] libmachine: domain functional-668205 has defined MAC address 52:54:00:00:a4:98 in network mk-functional-668205
I1216 02:45:36.012676   19535 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:00:a4:98", ip: ""} in network mk-functional-668205: {Iface:virbr1 ExpiryTime:2025-12-16 03:36:54 +0000 UTC Type:0 Mac:52:54:00:00:a4:98 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:functional-668205 Clientid:01:52:54:00:00:a4:98}
I1216 02:45:36.012700   19535 main.go:143] libmachine: domain functional-668205 has defined IP address 192.168.39.140 and MAC address 52:54:00:00:a4:98 in network mk-functional-668205
I1216 02:45:36.012844   19535 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/functional-668205/id_rsa Username:docker}
I1216 02:45:36.094625   19535 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-668205 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-668205"],"size":"4945146"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b4610899694
49f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"90819569"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sh
a256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"79bf8780f250285809115c22b367d45b13a20d3eb0a84e4efc01d84b1f115556","repoDigests":["localhost/minikube-local-cache-test@sha256:4a2e5fa969107ad385e3163d1892f1b
e0a01543cc50a3441fb6bd6a16e630365"],"repoTags":["localhost/minikube-local-cache-test:functional-668205"],"size":"3330"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry
.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-c
ontroller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"76872535"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"52747095"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox
@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":["registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a","registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"71977881"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-668205 image ls --format json --alsologtostderr:
I1216 02:45:36.005791   19534 out.go:360] Setting OutFile to fd 1 ...
I1216 02:45:36.006070   19534 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:45:36.006081   19534 out.go:374] Setting ErrFile to fd 2...
I1216 02:45:36.006087   19534 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:45:36.006375   19534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
I1216 02:45:36.007028   19534 config.go:182] Loaded profile config "functional-668205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 02:45:36.007158   19534 config.go:182] Loaded profile config "functional-668205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 02:45:36.009589   19534 ssh_runner.go:195] Run: systemctl --version
I1216 02:45:36.012386   19534 main.go:143] libmachine: domain functional-668205 has defined MAC address 52:54:00:00:a4:98 in network mk-functional-668205
I1216 02:45:36.012806   19534 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:00:a4:98", ip: ""} in network mk-functional-668205: {Iface:virbr1 ExpiryTime:2025-12-16 03:36:54 +0000 UTC Type:0 Mac:52:54:00:00:a4:98 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:functional-668205 Clientid:01:52:54:00:00:a4:98}
I1216 02:45:36.012839   19534 main.go:143] libmachine: domain functional-668205 has defined IP address 192.168.39.140 and MAC address 52:54:00:00:a4:98 in network mk-functional-668205
I1216 02:45:36.013081   19534 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/functional-668205/id_rsa Username:docker}
I1216 02:45:36.097339   19534 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.20s)
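The JSON format above is an array of objects with id, repoDigests, repoTags and size fields. A sketch that decodes it with matching struct tags, assuming minikube on PATH:

// Hypothetical sketch: decode `minikube image ls --format json` using the field
// names visible in the output above (id, repoDigests, repoTags, size).
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // reported as a string of bytes
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-668205",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("decoding image list: %v", err)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%-60s %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}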

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-668205 image ls --format yaml --alsologtostderr:
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:ca8b699e445178c1fc4a8f31245d6bd7bd97192cc7b43baa2360522e09b55581
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "76872535"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
- registry.k8s.io/kube-proxy@sha256:70a55889ba3d6b048529c8edae375ce2f20d1204f3bbcacd24e617abe8888b82
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "71977881"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:bb3d10b07de89c1e36a78794573fdbb7939a465d235a5bd164bae43aec22ee5b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "52747095"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 79bf8780f250285809115c22b367d45b13a20d3eb0a84e4efc01d84b1f115556
repoDigests:
- localhost/minikube-local-cache-test@sha256:4a2e5fa969107ad385e3163d1892f1be0a01543cc50a3441fb6bd6a16e630365
repoTags:
- localhost/minikube-local-cache-test:functional-668205
size: "3330"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:c95487a138f982d925eb8c59c7fc40761c58af445463ac4df872aee36c5e999c
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "90819569"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-668205
size: "4945146"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-668205 image ls --format yaml --alsologtostderr:
I1216 02:45:35.801893   19515 out.go:360] Setting OutFile to fd 1 ...
I1216 02:45:35.802160   19515 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:45:35.802169   19515 out.go:374] Setting ErrFile to fd 2...
I1216 02:45:35.802173   19515 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:45:35.802358   19515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
I1216 02:45:35.802898   19515 config.go:182] Loaded profile config "functional-668205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 02:45:35.803019   19515 config.go:182] Loaded profile config "functional-668205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 02:45:35.805222   19515 ssh_runner.go:195] Run: systemctl --version
I1216 02:45:35.808249   19515 main.go:143] libmachine: domain functional-668205 has defined MAC address 52:54:00:00:a4:98 in network mk-functional-668205
I1216 02:45:35.808629   19515 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:00:a4:98", ip: ""} in network mk-functional-668205: {Iface:virbr1 ExpiryTime:2025-12-16 03:36:54 +0000 UTC Type:0 Mac:52:54:00:00:a4:98 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:functional-668205 Clientid:01:52:54:00:00:a4:98}
I1216 02:45:35.808653   19515 main.go:143] libmachine: domain functional-668205 has defined IP address 192.168.39.140 and MAC address 52:54:00:00:a4:98 in network mk-functional-668205
I1216 02:45:35.808797   19515 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/functional-668205/id_rsa Username:docker}
I1216 02:45:35.897116   19515 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668205 ssh pgrep buildkitd: exit status 1 (152.322261ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 image build -t localhost/my-image:functional-668205 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-668205 image build -t localhost/my-image:functional-668205 testdata/build --alsologtostderr: (3.036982144s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-668205 image build -t localhost/my-image:functional-668205 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b257735debb
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-668205
--> 79d6f22e4ad
Successfully tagged localhost/my-image:functional-668205
79d6f22e4ad0d92240fe4bc4d794ac97f47212bb92397f9748eccd537795bd18
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-668205 image build -t localhost/my-image:functional-668205 testdata/build --alsologtostderr:
I1216 02:45:36.348271   19567 out.go:360] Setting OutFile to fd 1 ...
I1216 02:45:36.348553   19567 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:45:36.348563   19567 out.go:374] Setting ErrFile to fd 2...
I1216 02:45:36.348568   19567 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1216 02:45:36.348757   19567 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
I1216 02:45:36.349320   19567 config.go:182] Loaded profile config "functional-668205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 02:45:36.350009   19567 config.go:182] Loaded profile config "functional-668205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1216 02:45:36.351991   19567 ssh_runner.go:195] Run: systemctl --version
I1216 02:45:36.353950   19567 main.go:143] libmachine: domain functional-668205 has defined MAC address 52:54:00:00:a4:98 in network mk-functional-668205
I1216 02:45:36.354281   19567 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:00:a4:98", ip: ""} in network mk-functional-668205: {Iface:virbr1 ExpiryTime:2025-12-16 03:36:54 +0000 UTC Type:0 Mac:52:54:00:00:a4:98 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:functional-668205 Clientid:01:52:54:00:00:a4:98}
I1216 02:45:36.354305   19567 main.go:143] libmachine: domain functional-668205 has defined IP address 192.168.39.140 and MAC address 52:54:00:00:a4:98 in network mk-functional-668205
I1216 02:45:36.354432   19567 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/functional-668205/id_rsa Username:docker}
I1216 02:45:36.445003   19567 build_images.go:162] Building image from path: /tmp/build.2837657848.tar
I1216 02:45:36.445067   19567 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 02:45:36.456837   19567 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2837657848.tar
I1216 02:45:36.461625   19567 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2837657848.tar: stat -c "%s %y" /var/lib/minikube/build/build.2837657848.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2837657848.tar': No such file or directory
I1216 02:45:36.461655   19567 ssh_runner.go:362] scp /tmp/build.2837657848.tar --> /var/lib/minikube/build/build.2837657848.tar (3072 bytes)
I1216 02:45:36.499629   19567 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2837657848
I1216 02:45:36.516728   19567 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2837657848 -xf /var/lib/minikube/build/build.2837657848.tar
I1216 02:45:36.531008   19567 crio.go:315] Building image: /var/lib/minikube/build/build.2837657848
I1216 02:45:36.531104   19567 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-668205 /var/lib/minikube/build/build.2837657848 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1216 02:45:39.301107   19567 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-668205 /var/lib/minikube/build/build.2837657848 --cgroup-manager=cgroupfs: (2.769975441s)
I1216 02:45:39.301186   19567 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2837657848
I1216 02:45:39.314587   19567 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2837657848.tar
I1216 02:45:39.326464   19567 build_images.go:218] Built localhost/my-image:functional-668205 from /tmp/build.2837657848.tar
I1216 02:45:39.326503   19567 build_images.go:134] succeeded building to: functional-668205
I1216 02:45:39.326509   19567 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.39s)
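ImageBuild sends a build context to the in-cluster runtime (podman driven by cri-o here) via minikube image build. A sketch of the same call, assuming minikube on PATH and a local ./build directory containing a Dockerfile analogous to the test's testdata/build:

// Hypothetical sketch: build an image inside the cluster's runtime and then
// confirm it is listed, mirroring the ImageBuild steps above.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile := "functional-668205"
	tag := "localhost/my-image:" + profile

	out, err := exec.Command("minikube", "-p", profile, "image", "build",
		"-t", tag, "./build", "--alsologtostderr").CombinedOutput()
	if err != nil {
		log.Fatalf("image build failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)

	// Confirm the image is now listed by the runtime, as the test does.
	list, err := exec.Command("minikube", "-p", profile, "image", "ls").Output()
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	fmt.Printf("%s", list)
}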

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.85s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-668205
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.85s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 image load --daemon kicbase/echo-server:functional-668205 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 image load --daemon kicbase/echo-server:functional-668205 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.87s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-668205
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 image load --daemon kicbase/echo-server:functional-668205 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.87s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-668205 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo480010911/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668205 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (192.687556ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 02:45:30.595281    8974 retry.go:31] will retry after 341.933372ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-668205 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo480010911/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668205 ssh "sudo umount -f /mount-9p": exit status 1 (196.991179ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-668205 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-668205 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo480010911/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.7s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 image save kicbase/echo-server:functional-668205 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.70s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-668205 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo205948286/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-668205 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo205948286/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-668205 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo205948286/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-668205 ssh "findmnt -T" /mount1: exit status 1 (258.500355ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 02:45:31.978444    8974 retry.go:31] will retry after 552.834847ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-668205 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-668205 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo205948286/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-668205 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo205948286/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-668205 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo205948286/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.64s)
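The same cleanup check can be reproduced by hand roughly as below; /tmp/data is a hypothetical host directory, and the mounts are backgrounded here for illustration (the test runs them as daemons). The first findmnt may fail with exit 1 until the mount is up, which matches the single retry logged above:

    # Start three background mounts of the same host directory.
    out/minikube-linux-amd64 mount -p functional-668205 /tmp/data:/mount1 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-668205 /tmp/data:/mount2 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-668205 /tmp/data:/mount3 --alsologtostderr -v=1 &

    # Verify each mount point is visible inside the guest.
    for m in /mount1 /mount2 /mount3; do
      out/minikube-linux-amd64 -p functional-668205 ssh "findmnt -T $m"
    done

    # Tear down every mount belonging to the profile in one call.
    out/minikube-linux-amd64 mount -p functional-668205 --kill=true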

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (2.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 image rm kicbase/echo-server:functional-668205 --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-668205 image rm kicbase/echo-server:functional-668205 --alsologtostderr: (2.426480399s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (2.64s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.71s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.71s)
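Taken together with ImageRemove above, the round trip looks roughly like this; the tarball path is whichever file `image save` wrote earlier:

    # Remove the image from the cluster, then restore it from the saved tarball.
    out/minikube-linux-amd64 -p functional-668205 image rm kicbase/echo-server:functional-668205 --alsologtostderr
    out/minikube-linux-amd64 -p functional-668205 image load /tmp/echo-server-save.tar --alsologtostderr
    # Confirm the image is listed again.
    out/minikube-linux-amd64 -p functional-668205 image ls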

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-668205
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-668205 image save --daemon kicbase/echo-server:functional-668205 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-668205
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.52s)
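A sketch of the daemon round trip, assuming a local Docker daemon is available as in this run:

    # Drop the local copy, pull it back out of the cluster into the Docker daemon,
    # then confirm Docker can see it (minikube exposes it under the localhost/ prefix).
    docker rmi kicbase/echo-server:functional-668205
    out/minikube-linux-amd64 -p functional-668205 image save --daemon kicbase/echo-server:functional-668205 --alsologtostderr
    docker image inspect localhost/kicbase/echo-server:functional-668205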

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-668205
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-668205
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-668205
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (190.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1216 02:45:49.064879    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:46:16.768815    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:48:09.598535    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-195596 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m9.877230946s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (190.40s)
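The start invocation above can be run directly; --ha brings up a multi-control-plane cluster and --wait true blocks until the core components report healthy:

    # Create an HA cluster on the KVM driver with the CRI-O runtime.
    out/minikube-linux-amd64 -p ha-195596 start --ha --memory 3072 --wait true \
      --alsologtostderr -v 5 --driver=kvm2 --container-runtime=crio
    # Every node should report Running/Configured.
    out/minikube-linux-amd64 -p ha-195596 status --alsologtostderr -v 5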

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-195596 kubectl -- rollout status deployment/busybox: (4.251722024s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 kubectl -- exec busybox-7b57f96db7-22nb7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 kubectl -- exec busybox-7b57f96db7-rkqws -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 kubectl -- exec busybox-7b57f96db7-vwjmd -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 kubectl -- exec busybox-7b57f96db7-22nb7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 kubectl -- exec busybox-7b57f96db7-rkqws -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 kubectl -- exec busybox-7b57f96db7-vwjmd -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 kubectl -- exec busybox-7b57f96db7-22nb7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 kubectl -- exec busybox-7b57f96db7-rkqws -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 kubectl -- exec busybox-7b57f96db7-vwjmd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.46s)
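A condensed sketch of the DNS checks performed above; the busybox pod names are generated by the deployment, so they are looked up first (this assumes only the busybox pods exist in the default namespace, as in this run):

    # Deploy the test workload and wait for the rollout.
    out/minikube-linux-amd64 -p ha-195596 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-amd64 -p ha-195596 kubectl -- rollout status deployment/busybox

    # Resolve an external name and an in-cluster name from every busybox pod.
    for pod in $(out/minikube-linux-amd64 -p ha-195596 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'); do
      out/minikube-linux-amd64 -p ha-195596 kubectl -- exec "$pod" -- nslookup kubernetes.io
      out/minikube-linux-amd64 -p ha-195596 kubectl -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done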

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 kubectl -- exec busybox-7b57f96db7-22nb7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 kubectl -- exec busybox-7b57f96db7-22nb7 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 kubectl -- exec busybox-7b57f96db7-rkqws -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 kubectl -- exec busybox-7b57f96db7-rkqws -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 kubectl -- exec busybox-7b57f96db7-vwjmd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 kubectl -- exec busybox-7b57f96db7-vwjmd -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.25s)
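The host-reachability check boils down to resolving host.minikube.internal inside a pod and pinging the returned address (192.168.39.1 in this run). A sketch for a single pod, using one of the pod names from above:

    # Extract the host IP that minikube publishes as host.minikube.internal,
    # then ping it once from inside the pod.
    POD=busybox-7b57f96db7-22nb7   # one of the pods created by DeployApp in this run
    HOST_IP=$(out/minikube-linux-amd64 -p ha-195596 kubectl -- exec "$POD" -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-amd64 -p ha-195596 kubectl -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"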

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (43.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 node add --alsologtostderr -v 5
E1216 02:49:32.675191    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-195596 node add --alsologtostderr -v 5: (42.542404966s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (43.20s)
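Adding a worker is a single command; without --control-plane the new node joins as a worker only:

    # Add a fourth node (worker) to the running HA cluster and re-check status.
    out/minikube-linux-amd64 -p ha-195596 node add --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-195596 status --alsologtostderr -v 5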

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-195596 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (10.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 cp testdata/cp-test.txt ha-195596:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 cp ha-195596:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1380205973/001/cp-test_ha-195596.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 cp ha-195596:/home/docker/cp-test.txt ha-195596-m02:/home/docker/cp-test_ha-195596_ha-195596-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m02 "sudo cat /home/docker/cp-test_ha-195596_ha-195596-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 cp ha-195596:/home/docker/cp-test.txt ha-195596-m03:/home/docker/cp-test_ha-195596_ha-195596-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m03 "sudo cat /home/docker/cp-test_ha-195596_ha-195596-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 cp ha-195596:/home/docker/cp-test.txt ha-195596-m04:/home/docker/cp-test_ha-195596_ha-195596-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m04 "sudo cat /home/docker/cp-test_ha-195596_ha-195596-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 cp testdata/cp-test.txt ha-195596-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 cp ha-195596-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1380205973/001/cp-test_ha-195596-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 cp ha-195596-m02:/home/docker/cp-test.txt ha-195596:/home/docker/cp-test_ha-195596-m02_ha-195596.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596 "sudo cat /home/docker/cp-test_ha-195596-m02_ha-195596.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 cp ha-195596-m02:/home/docker/cp-test.txt ha-195596-m03:/home/docker/cp-test_ha-195596-m02_ha-195596-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m03 "sudo cat /home/docker/cp-test_ha-195596-m02_ha-195596-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 cp ha-195596-m02:/home/docker/cp-test.txt ha-195596-m04:/home/docker/cp-test_ha-195596-m02_ha-195596-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m04 "sudo cat /home/docker/cp-test_ha-195596-m02_ha-195596-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 cp testdata/cp-test.txt ha-195596-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 cp ha-195596-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1380205973/001/cp-test_ha-195596-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 cp ha-195596-m03:/home/docker/cp-test.txt ha-195596:/home/docker/cp-test_ha-195596-m03_ha-195596.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596 "sudo cat /home/docker/cp-test_ha-195596-m03_ha-195596.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 cp ha-195596-m03:/home/docker/cp-test.txt ha-195596-m02:/home/docker/cp-test_ha-195596-m03_ha-195596-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m02 "sudo cat /home/docker/cp-test_ha-195596-m03_ha-195596-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 cp ha-195596-m03:/home/docker/cp-test.txt ha-195596-m04:/home/docker/cp-test_ha-195596-m03_ha-195596-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m04 "sudo cat /home/docker/cp-test_ha-195596-m03_ha-195596-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 cp testdata/cp-test.txt ha-195596-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 cp ha-195596-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1380205973/001/cp-test_ha-195596-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 cp ha-195596-m04:/home/docker/cp-test.txt ha-195596:/home/docker/cp-test_ha-195596-m04_ha-195596.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596 "sudo cat /home/docker/cp-test_ha-195596-m04_ha-195596.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 cp ha-195596-m04:/home/docker/cp-test.txt ha-195596-m02:/home/docker/cp-test_ha-195596-m04_ha-195596-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m02 "sudo cat /home/docker/cp-test_ha-195596-m04_ha-195596-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 cp ha-195596-m04:/home/docker/cp-test.txt ha-195596-m03:/home/docker/cp-test_ha-195596-m04_ha-195596-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m03 "sudo cat /home/docker/cp-test_ha-195596-m04_ha-195596-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.38s)
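Each copy above follows the same pattern: `minikube cp` moves the file and `minikube ssh -n <node>` reads it back to verify the contents. A minimal sketch of the three directions (the local destination path is illustrative; the run used a temporary directory):

    # Host -> node
    out/minikube-linux-amd64 -p ha-195596 cp testdata/cp-test.txt ha-195596:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596 "sudo cat /home/docker/cp-test.txt"

    # Node -> host
    out/minikube-linux-amd64 -p ha-195596 cp ha-195596:/home/docker/cp-test.txt /tmp/cp-test_ha-195596.txt

    # Node -> node
    out/minikube-linux-amd64 -p ha-195596 cp ha-195596:/home/docker/cp-test.txt \
      ha-195596-m02:/home/docker/cp-test_ha-195596_ha-195596-m02.txt
    out/minikube-linux-amd64 -p ha-195596 ssh -n ha-195596-m02 "sudo cat /home/docker/cp-test_ha-195596_ha-195596-m02.txt"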

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (82.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 node stop m02 --alsologtostderr -v 5
E1216 02:49:54.391938    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:49:54.398325    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:49:54.409708    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:49:54.431042    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:49:54.472409    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:49:54.553758    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:49:54.715971    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:49:55.037750    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:49:55.679452    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:49:56.961323    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:49:59.523224    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:50:04.645460    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:50:14.887487    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:50:35.369403    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:50:49.065129    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-195596 node stop m02 --alsologtostderr -v 5: (1m21.570902235s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-195596 status --alsologtostderr -v 5: exit status 7 (493.426369ms)

                                                
                                                
-- stdout --
	ha-195596
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-195596-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-195596-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-195596-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 02:51:14.562220   22551 out.go:360] Setting OutFile to fd 1 ...
	I1216 02:51:14.562325   22551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:51:14.562333   22551 out.go:374] Setting ErrFile to fd 2...
	I1216 02:51:14.562337   22551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 02:51:14.562518   22551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
	I1216 02:51:14.562663   22551 out.go:368] Setting JSON to false
	I1216 02:51:14.562686   22551 mustload.go:66] Loading cluster: ha-195596
	I1216 02:51:14.562752   22551 notify.go:221] Checking for updates...
	I1216 02:51:14.563122   22551 config.go:182] Loaded profile config "ha-195596": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 02:51:14.563142   22551 status.go:174] checking status of ha-195596 ...
	I1216 02:51:14.565137   22551 status.go:371] ha-195596 host status = "Running" (err=<nil>)
	I1216 02:51:14.565150   22551 host.go:66] Checking if "ha-195596" exists ...
	I1216 02:51:14.567645   22551 main.go:143] libmachine: domain ha-195596 has defined MAC address 52:54:00:47:ac:3f in network mk-ha-195596
	I1216 02:51:14.568187   22551 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:47:ac:3f", ip: ""} in network mk-ha-195596: {Iface:virbr1 ExpiryTime:2025-12-16 03:45:54 +0000 UTC Type:0 Mac:52:54:00:47:ac:3f Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ha-195596 Clientid:01:52:54:00:47:ac:3f}
	I1216 02:51:14.568214   22551 main.go:143] libmachine: domain ha-195596 has defined IP address 192.168.39.242 and MAC address 52:54:00:47:ac:3f in network mk-ha-195596
	I1216 02:51:14.568369   22551 host.go:66] Checking if "ha-195596" exists ...
	I1216 02:51:14.568588   22551 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:51:14.570709   22551 main.go:143] libmachine: domain ha-195596 has defined MAC address 52:54:00:47:ac:3f in network mk-ha-195596
	I1216 02:51:14.571149   22551 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:47:ac:3f", ip: ""} in network mk-ha-195596: {Iface:virbr1 ExpiryTime:2025-12-16 03:45:54 +0000 UTC Type:0 Mac:52:54:00:47:ac:3f Iaid: IPaddr:192.168.39.242 Prefix:24 Hostname:ha-195596 Clientid:01:52:54:00:47:ac:3f}
	I1216 02:51:14.571198   22551 main.go:143] libmachine: domain ha-195596 has defined IP address 192.168.39.242 and MAC address 52:54:00:47:ac:3f in network mk-ha-195596
	I1216 02:51:14.571401   22551 sshutil.go:53] new ssh client: &{IP:192.168.39.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/ha-195596/id_rsa Username:docker}
	I1216 02:51:14.662976   22551 ssh_runner.go:195] Run: systemctl --version
	I1216 02:51:14.668994   22551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 02:51:14.686756   22551 kubeconfig.go:125] found "ha-195596" server: "https://192.168.39.254:8443"
	I1216 02:51:14.686788   22551 api_server.go:166] Checking apiserver status ...
	I1216 02:51:14.686829   22551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:51:14.708598   22551 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1439/cgroup
	W1216 02:51:14.722128   22551 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1439/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:51:14.722202   22551 ssh_runner.go:195] Run: ls
	I1216 02:51:14.733013   22551 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1216 02:51:14.737996   22551 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1216 02:51:14.738015   22551 status.go:463] ha-195596 apiserver status = Running (err=<nil>)
	I1216 02:51:14.738022   22551 status.go:176] ha-195596 status: &{Name:ha-195596 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 02:51:14.738036   22551 status.go:174] checking status of ha-195596-m02 ...
	I1216 02:51:14.739521   22551 status.go:371] ha-195596-m02 host status = "Stopped" (err=<nil>)
	I1216 02:51:14.739541   22551 status.go:384] host is not running, skipping remaining checks
	I1216 02:51:14.739546   22551 status.go:176] ha-195596-m02 status: &{Name:ha-195596-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 02:51:14.739558   22551 status.go:174] checking status of ha-195596-m03 ...
	I1216 02:51:14.740687   22551 status.go:371] ha-195596-m03 host status = "Running" (err=<nil>)
	I1216 02:51:14.740700   22551 host.go:66] Checking if "ha-195596-m03" exists ...
	I1216 02:51:14.742936   22551 main.go:143] libmachine: domain ha-195596-m03 has defined MAC address 52:54:00:38:57:ba in network mk-ha-195596
	I1216 02:51:14.743277   22551 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:38:57:ba", ip: ""} in network mk-ha-195596: {Iface:virbr1 ExpiryTime:2025-12-16 03:47:48 +0000 UTC Type:0 Mac:52:54:00:38:57:ba Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-195596-m03 Clientid:01:52:54:00:38:57:ba}
	I1216 02:51:14.743302   22551 main.go:143] libmachine: domain ha-195596-m03 has defined IP address 192.168.39.140 and MAC address 52:54:00:38:57:ba in network mk-ha-195596
	I1216 02:51:14.743435   22551 host.go:66] Checking if "ha-195596-m03" exists ...
	I1216 02:51:14.743616   22551 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:51:14.745999   22551 main.go:143] libmachine: domain ha-195596-m03 has defined MAC address 52:54:00:38:57:ba in network mk-ha-195596
	I1216 02:51:14.746405   22551 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:38:57:ba", ip: ""} in network mk-ha-195596: {Iface:virbr1 ExpiryTime:2025-12-16 03:47:48 +0000 UTC Type:0 Mac:52:54:00:38:57:ba Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:ha-195596-m03 Clientid:01:52:54:00:38:57:ba}
	I1216 02:51:14.746434   22551 main.go:143] libmachine: domain ha-195596-m03 has defined IP address 192.168.39.140 and MAC address 52:54:00:38:57:ba in network mk-ha-195596
	I1216 02:51:14.746595   22551 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/ha-195596-m03/id_rsa Username:docker}
	I1216 02:51:14.825982   22551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 02:51:14.850207   22551 kubeconfig.go:125] found "ha-195596" server: "https://192.168.39.254:8443"
	I1216 02:51:14.850246   22551 api_server.go:166] Checking apiserver status ...
	I1216 02:51:14.850309   22551 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 02:51:14.868094   22551 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1791/cgroup
	W1216 02:51:14.878611   22551 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1791/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1216 02:51:14.878661   22551 ssh_runner.go:195] Run: ls
	I1216 02:51:14.884421   22551 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1216 02:51:14.889054   22551 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1216 02:51:14.889079   22551 status.go:463] ha-195596-m03 apiserver status = Running (err=<nil>)
	I1216 02:51:14.889091   22551 status.go:176] ha-195596-m03 status: &{Name:ha-195596-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 02:51:14.889108   22551 status.go:174] checking status of ha-195596-m04 ...
	I1216 02:51:14.890491   22551 status.go:371] ha-195596-m04 host status = "Running" (err=<nil>)
	I1216 02:51:14.890505   22551 host.go:66] Checking if "ha-195596-m04" exists ...
	I1216 02:51:14.892846   22551 main.go:143] libmachine: domain ha-195596-m04 has defined MAC address 52:54:00:bc:b8:28 in network mk-ha-195596
	I1216 02:51:14.893257   22551 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:b8:28", ip: ""} in network mk-ha-195596: {Iface:virbr1 ExpiryTime:2025-12-16 03:49:13 +0000 UTC Type:0 Mac:52:54:00:bc:b8:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-195596-m04 Clientid:01:52:54:00:bc:b8:28}
	I1216 02:51:14.893287   22551 main.go:143] libmachine: domain ha-195596-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:bc:b8:28 in network mk-ha-195596
	I1216 02:51:14.893430   22551 host.go:66] Checking if "ha-195596-m04" exists ...
	I1216 02:51:14.893647   22551 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 02:51:14.895655   22551 main.go:143] libmachine: domain ha-195596-m04 has defined MAC address 52:54:00:bc:b8:28 in network mk-ha-195596
	I1216 02:51:14.895970   22551 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:bc:b8:28", ip: ""} in network mk-ha-195596: {Iface:virbr1 ExpiryTime:2025-12-16 03:49:13 +0000 UTC Type:0 Mac:52:54:00:bc:b8:28 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-195596-m04 Clientid:01:52:54:00:bc:b8:28}
	I1216 02:51:14.895994   22551 main.go:143] libmachine: domain ha-195596-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:bc:b8:28 in network mk-ha-195596
	I1216 02:51:14.896146   22551 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/ha-195596-m04/id_rsa Username:docker}
	I1216 02:51:14.980526   22551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 02:51:14.997709   22551 status.go:176] ha-195596-m04 status: &{Name:ha-195596-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (82.07s)
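Stopping a single control-plane node is expected to leave the cluster serving (the API endpoint at https://192.168.39.254:8443/healthz still returns 200 above) while `status` exits with code 7 in this run because one host is Stopped. A sketch:

    # Stop the second control-plane node; the remaining members keep the API server up.
    out/minikube-linux-amd64 -p ha-195596 node stop m02 --alsologtostderr -v 5
    # status exits non-zero when any node is stopped, so don't let it abort a script.
    out/minikube-linux-amd64 -p ha-195596 status --alsologtostderr -v 5 || echo "status exit code: $?"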

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (31.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 node start m02 --alsologtostderr -v 5
E1216 02:51:16.331119    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-195596 node start m02 --alsologtostderr -v 5: (30.903198295s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (31.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (355.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 stop --alsologtostderr -v 5
E1216 02:52:38.254157    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:53:09.599246    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:54:54.392711    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:55:22.097125    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:55:49.067722    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-195596 stop --alsologtostderr -v 5: (4m7.873536518s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 start --wait true --alsologtostderr -v 5
E1216 02:57:12.131005    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-195596 start --wait true --alsologtostderr -v 5: (1m47.943380674s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (355.97s)
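The restart check amounts to: record the node list, stop everything, start again with --wait true, and confirm the node list is unchanged. A sketch with illustrative /tmp files:

    # Record the node list, restart the whole cluster, then compare.
    out/minikube-linux-amd64 -p ha-195596 node list --alsologtostderr -v 5 > /tmp/nodes-before.txt
    out/minikube-linux-amd64 -p ha-195596 stop --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-195596 start --wait true --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-195596 node list --alsologtostderr -v 5 > /tmp/nodes-after.txt
    diff /tmp/nodes-before.txt /tmp/nodes-after.txt   # no output means the node set was kept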

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (17.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-195596 node delete m03 --alsologtostderr -v 5: (17.105682212s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.71s)
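Deleting a control-plane node and confirming the survivors are healthy, as exercised above:

    # Remove the m03 control-plane node, then verify the remaining nodes.
    out/minikube-linux-amd64 -p ha-195596 node delete m03 --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-195596 status --alsologtostderr -v 5
    kubectl get nodes   # the STATUS column should show Ready for every remaining node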

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.50s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (254.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 stop --alsologtostderr -v 5
E1216 02:58:09.599004    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 02:59:54.395483    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:00:49.067849    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-195596 stop --alsologtostderr -v 5: (4m14.755542893s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-195596 status --alsologtostderr -v 5: exit status 7 (67.609991ms)

                                                
                                                
-- stdout --
	ha-195596
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-195596-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-195596-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:02:17.131308   25638 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:02:17.131586   25638 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:02:17.131596   25638 out.go:374] Setting ErrFile to fd 2...
	I1216 03:02:17.131601   25638 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:02:17.131767   25638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
	I1216 03:02:17.131950   25638 out.go:368] Setting JSON to false
	I1216 03:02:17.131974   25638 mustload.go:66] Loading cluster: ha-195596
	I1216 03:02:17.132029   25638 notify.go:221] Checking for updates...
	I1216 03:02:17.132306   25638 config.go:182] Loaded profile config "ha-195596": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:02:17.132321   25638 status.go:174] checking status of ha-195596 ...
	I1216 03:02:17.134274   25638 status.go:371] ha-195596 host status = "Stopped" (err=<nil>)
	I1216 03:02:17.134289   25638 status.go:384] host is not running, skipping remaining checks
	I1216 03:02:17.134295   25638 status.go:176] ha-195596 status: &{Name:ha-195596 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 03:02:17.134310   25638 status.go:174] checking status of ha-195596-m02 ...
	I1216 03:02:17.135297   25638 status.go:371] ha-195596-m02 host status = "Stopped" (err=<nil>)
	I1216 03:02:17.135308   25638 status.go:384] host is not running, skipping remaining checks
	I1216 03:02:17.135312   25638 status.go:176] ha-195596-m02 status: &{Name:ha-195596-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 03:02:17.135325   25638 status.go:174] checking status of ha-195596-m04 ...
	I1216 03:02:17.136352   25638 status.go:371] ha-195596-m04 host status = "Stopped" (err=<nil>)
	I1216 03:02:17.136363   25638 status.go:384] host is not running, skipping remaining checks
	I1216 03:02:17.136367   25638 status.go:176] ha-195596-m04 status: &{Name:ha-195596-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (254.82s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (91.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1216 03:03:09.598668    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-195596 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m31.206688651s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (91.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (70.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 node add --control-plane --alsologtostderr -v 5
E1216 03:04:54.391507    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-195596 node add --control-plane --alsologtostderr -v 5: (1m10.335448918s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-195596 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (70.98s)
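Adding a node back as a control-plane member differs from the worker case only by the --control-plane flag:

    # Join a new node as an additional control plane and verify.
    out/minikube-linux-amd64 -p ha-195596 node add --control-plane --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-195596 status --alsologtostderr -v 5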

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.65s)

                                                
                                    
x
+
TestJSONOutput/start/Command (76.34s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-744591 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1216 03:05:49.068374    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:06:12.676637    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:06:17.459504    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-744591 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m16.34113803s)
--- PASS: TestJSONOutput/start/Command (76.34s)
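With --output=json, minikube emits one CloudEvents-style JSON object per line (the same shape as the records shown under TestErrorJSONOutput at the end of this section), which makes the output easy to post-process. A sketch using jq, purely as an illustration and not part of the test:

    # Start with machine-readable output and print just the step messages.
    out/minikube-linux-amd64 start -p json-output-744591 --output=json --user=testUser \
      --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'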

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-744591 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-744591 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.79s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-744591 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-744591 --output=json --user=testUser: (6.79220951s)
--- PASS: TestJSONOutput/stop/Command (6.79s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-890536 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-890536 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (75.24633ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3fc8e650-0158-424f-8b63-4d3ec637e2c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-890536] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eb5d4bfc-f1d5-4032-997e-cf1df5ca676d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22158"}}
	{"specversion":"1.0","id":"9a1d0953-9f8d-435d-8815-bf439b2fbec3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"82cebd30-c45e-4e06-948b-b0c27dffc40b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig"}}
	{"specversion":"1.0","id":"5286b4dc-1ec0-4035-bd78-092419de2db7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube"}}
	{"specversion":"1.0","id":"34acb927-c081-4cba-a1f0-9764603503ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9f9069c3-e33b-4a5a-9ddd-478448b59708","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bfdc571f-7137-4780-929f-440cce0f10b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-890536" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-890536
--- PASS: TestErrorJSONOutput (0.22s)
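The stdout above is a stream of CloudEvents, one JSON object per line, with types such as io.k8s.sigs.minikube.step, io.k8s.sigs.minikube.info, and io.k8s.sigs.minikube.error. A minimal sketch of filtering that stream on the host, assuming jq is installed (the event type and field names are taken from the output above):

  # Surface only error events from the --output=json stream and print their name, message, and exit code.
  out/minikube-linux-amd64 start -p json-output-error-890536 --memory=3072 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'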

                                                
                                    
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (74.81s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-491033 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-491033 --driver=kvm2  --container-runtime=crio: (37.06828755s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-493329 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-493329 --driver=kvm2  --container-runtime=crio: (35.258973562s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-491033
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-493329
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-493329" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-493329
helpers_test.go:176: Cleaning up "first-491033" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-491033
--- PASS: TestMinikubeProfile (74.81s)
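The sequence above is the basic multi-profile workflow: each `start -p <name>` creates an isolated cluster, `profile <name>` switches the active profile, and `profile list -ojson` reports all profiles in machine-readable form. A minimal sketch of the same flow with placeholder profile names:

  # Create two independent profiles, switch between them, then inspect the list.
  out/minikube-linux-amd64 start -p first --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 start -p second --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 profile first            # make "first" the active profile
  out/minikube-linux-amd64 profile list -ojson      # JSON listing of both profiles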

                                                
                                    
TestMountStart/serial/StartWithMountFirst (19.42s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-022299 --memory=3072 --mount-string /tmp/TestMountStartserial1538852134/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-022299 --memory=3072 --mount-string /tmp/TestMountStartserial1538852134/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (18.419109358s)
--- PASS: TestMountStart/serial/StartWithMountFirst (19.42s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.29s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-022299 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-022299 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)
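Taken together, the two steps above show the host-mount flow this test exercises: the host directory given to --mount-string is exposed inside the guest at /minikube-host, and findmnt confirms it from inside the VM. A minimal sketch with a placeholder profile name and host path:

  # Start a no-Kubernetes VM with a host directory mounted at /minikube-host.
  out/minikube-linux-amd64 start -p mount-demo --memory=3072 \
    --mount-string /tmp/data:/minikube-host --mount-port 46464 \
    --no-kubernetes --driver=kvm2 --container-runtime=crio
  # Verify the mount from inside the guest.
  out/minikube-linux-amd64 -p mount-demo ssh -- findmnt --json /minikube-host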

                                                
                                    
TestMountStart/serial/StartWithMountSecond (18.63s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-037955 --memory=3072 --mount-string /tmp/TestMountStartserial1538852134/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1216 03:08:09.598719    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-037955 --memory=3072 --mount-string /tmp/TestMountStartserial1538852134/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (17.630318472s)
--- PASS: TestMountStart/serial/StartWithMountSecond (18.63s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-037955 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-037955 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.67s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-022299 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.3s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-037955 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-037955 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-037955
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-037955: (1.280508156s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (18s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-037955
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-037955: (16.996998599s)
--- PASS: TestMountStart/serial/RestartStopped (18.00s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.29s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-037955 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-037955 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (92.73s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-496255 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1216 03:09:54.392176    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-496255 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m32.410248346s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (92.73s)
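The single command above brings up the two-node cluster used by the rest of this suite: --nodes=2 adds a worker (multinode-496255-m02) next to the control plane, and --wait=true holds the command until the cluster components report in. A minimal sketch with a placeholder profile name:

  # Create a control plane plus one worker, then check both machines.
  out/minikube-linux-amd64 start -p demo --wait=true --memory=3072 --nodes=2 \
    --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 -p demo status --alsologtostderr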

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.06s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-496255 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-496255 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-496255 -- rollout status deployment/busybox: (4.556584252s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-496255 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-496255 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-496255 -- exec busybox-7b57f96db7-2z6kb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-496255 -- exec busybox-7b57f96db7-c6rp8 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-496255 -- exec busybox-7b57f96db7-2z6kb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-496255 -- exec busybox-7b57f96db7-c6rp8 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-496255 -- exec busybox-7b57f96db7-2z6kb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-496255 -- exec busybox-7b57f96db7-c6rp8 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.06s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.82s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-496255 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-496255 -- exec busybox-7b57f96db7-2z6kb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-496255 -- exec busybox-7b57f96db7-2z6kb -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-496255 -- exec busybox-7b57f96db7-c6rp8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-496255 -- exec busybox-7b57f96db7-c6rp8 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)
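The shell pipeline run in each busybox pod above extracts the address behind host.minikube.internal: it relies on busybox nslookup printing the answer on its fifth output line, so awk 'NR==5' keeps that line and cut -d' ' -f3 takes the address field, which is then pinged (192.168.39.1 in this run). A minimal sketch against a placeholder pod name:

  # Resolve host.minikube.internal from inside a pod and ping the result once.
  HOST_IP=$(kubectl exec busybox-pod -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl exec busybox-pod -- sh -c "ping -c 1 $HOST_IP"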

                                                
                                    
TestMultiNode/serial/AddNode (40.64s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-496255 -v=5 --alsologtostderr
E1216 03:10:49.065100    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-496255 -v=5 --alsologtostderr: (40.214951398s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (40.64s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-496255 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.44s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.44s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.91s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 cp testdata/cp-test.txt multinode-496255:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 ssh -n multinode-496255 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 cp multinode-496255:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1313016824/001/cp-test_multinode-496255.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 ssh -n multinode-496255 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 cp multinode-496255:/home/docker/cp-test.txt multinode-496255-m02:/home/docker/cp-test_multinode-496255_multinode-496255-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 ssh -n multinode-496255 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 ssh -n multinode-496255-m02 "sudo cat /home/docker/cp-test_multinode-496255_multinode-496255-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 cp multinode-496255:/home/docker/cp-test.txt multinode-496255-m03:/home/docker/cp-test_multinode-496255_multinode-496255-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 ssh -n multinode-496255 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 ssh -n multinode-496255-m03 "sudo cat /home/docker/cp-test_multinode-496255_multinode-496255-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 cp testdata/cp-test.txt multinode-496255-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 ssh -n multinode-496255-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 cp multinode-496255-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1313016824/001/cp-test_multinode-496255-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 ssh -n multinode-496255-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 cp multinode-496255-m02:/home/docker/cp-test.txt multinode-496255:/home/docker/cp-test_multinode-496255-m02_multinode-496255.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 ssh -n multinode-496255-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 ssh -n multinode-496255 "sudo cat /home/docker/cp-test_multinode-496255-m02_multinode-496255.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 cp multinode-496255-m02:/home/docker/cp-test.txt multinode-496255-m03:/home/docker/cp-test_multinode-496255-m02_multinode-496255-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 ssh -n multinode-496255-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 ssh -n multinode-496255-m03 "sudo cat /home/docker/cp-test_multinode-496255-m02_multinode-496255-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 cp testdata/cp-test.txt multinode-496255-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 ssh -n multinode-496255-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 cp multinode-496255-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1313016824/001/cp-test_multinode-496255-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 ssh -n multinode-496255-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 cp multinode-496255-m03:/home/docker/cp-test.txt multinode-496255:/home/docker/cp-test_multinode-496255-m03_multinode-496255.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 ssh -n multinode-496255-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 ssh -n multinode-496255 "sudo cat /home/docker/cp-test_multinode-496255-m03_multinode-496255.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 cp multinode-496255-m03:/home/docker/cp-test.txt multinode-496255-m02:/home/docker/cp-test_multinode-496255-m03_multinode-496255-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 ssh -n multinode-496255-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 ssh -n multinode-496255-m02 "sudo cat /home/docker/cp-test_multinode-496255-m03_multinode-496255-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.91s)
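Every transfer above uses the same `cp` addressing scheme: a bare path refers to the host, while <node>:<path> refers to a file inside a node, so host-to-node, node-to-host, and node-to-node copies are all expressed the same way, and each copy is checked with `ssh -n <node> "sudo cat ..."`. A minimal sketch with a placeholder profile (node names follow the <profile>, <profile>-m02 pattern seen above):

  # Host -> node, then node -> node, then read the file back over ssh.
  out/minikube-linux-amd64 -p demo cp testdata/cp-test.txt demo:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p demo cp demo:/home/docker/cp-test.txt demo-m02:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"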

                                                
                                    
TestMultiNode/serial/StopNode (2.14s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-496255 node stop m03: (1.493152973s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-496255 status: exit status 7 (317.303133ms)

                                                
                                                
-- stdout --
	multinode-496255
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-496255-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-496255-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-496255 status --alsologtostderr: exit status 7 (326.038719ms)

                                                
                                                
-- stdout --
	multinode-496255
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-496255-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-496255-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:11:12.657498   31046 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:11:12.657616   31046 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:11:12.657624   31046 out.go:374] Setting ErrFile to fd 2...
	I1216 03:11:12.657627   31046 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:11:12.657799   31046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
	I1216 03:11:12.657960   31046 out.go:368] Setting JSON to false
	I1216 03:11:12.657984   31046 mustload.go:66] Loading cluster: multinode-496255
	I1216 03:11:12.658097   31046 notify.go:221] Checking for updates...
	I1216 03:11:12.658329   31046 config.go:182] Loaded profile config "multinode-496255": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:11:12.658343   31046 status.go:174] checking status of multinode-496255 ...
	I1216 03:11:12.660432   31046 status.go:371] multinode-496255 host status = "Running" (err=<nil>)
	I1216 03:11:12.660449   31046 host.go:66] Checking if "multinode-496255" exists ...
	I1216 03:11:12.662986   31046 main.go:143] libmachine: domain multinode-496255 has defined MAC address 52:54:00:fb:17:f3 in network mk-multinode-496255
	I1216 03:11:12.663414   31046 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:17:f3", ip: ""} in network mk-multinode-496255: {Iface:virbr1 ExpiryTime:2025-12-16 04:08:58 +0000 UTC Type:0 Mac:52:54:00:fb:17:f3 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:multinode-496255 Clientid:01:52:54:00:fb:17:f3}
	I1216 03:11:12.663441   31046 main.go:143] libmachine: domain multinode-496255 has defined IP address 192.168.39.243 and MAC address 52:54:00:fb:17:f3 in network mk-multinode-496255
	I1216 03:11:12.663614   31046 host.go:66] Checking if "multinode-496255" exists ...
	I1216 03:11:12.663860   31046 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:11:12.666127   31046 main.go:143] libmachine: domain multinode-496255 has defined MAC address 52:54:00:fb:17:f3 in network mk-multinode-496255
	I1216 03:11:12.666603   31046 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:17:f3", ip: ""} in network mk-multinode-496255: {Iface:virbr1 ExpiryTime:2025-12-16 04:08:58 +0000 UTC Type:0 Mac:52:54:00:fb:17:f3 Iaid: IPaddr:192.168.39.243 Prefix:24 Hostname:multinode-496255 Clientid:01:52:54:00:fb:17:f3}
	I1216 03:11:12.666658   31046 main.go:143] libmachine: domain multinode-496255 has defined IP address 192.168.39.243 and MAC address 52:54:00:fb:17:f3 in network mk-multinode-496255
	I1216 03:11:12.666801   31046 sshutil.go:53] new ssh client: &{IP:192.168.39.243 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/multinode-496255/id_rsa Username:docker}
	I1216 03:11:12.754560   31046 ssh_runner.go:195] Run: systemctl --version
	I1216 03:11:12.761788   31046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:11:12.780673   31046 kubeconfig.go:125] found "multinode-496255" server: "https://192.168.39.243:8443"
	I1216 03:11:12.780703   31046 api_server.go:166] Checking apiserver status ...
	I1216 03:11:12.780736   31046 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 03:11:12.801405   31046 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1398/cgroup
	W1216 03:11:12.813407   31046 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1398/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1216 03:11:12.813443   31046 ssh_runner.go:195] Run: ls
	I1216 03:11:12.817958   31046 api_server.go:253] Checking apiserver healthz at https://192.168.39.243:8443/healthz ...
	I1216 03:11:12.823057   31046 api_server.go:279] https://192.168.39.243:8443/healthz returned 200:
	ok
	I1216 03:11:12.823072   31046 status.go:463] multinode-496255 apiserver status = Running (err=<nil>)
	I1216 03:11:12.823080   31046 status.go:176] multinode-496255 status: &{Name:multinode-496255 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 03:11:12.823097   31046 status.go:174] checking status of multinode-496255-m02 ...
	I1216 03:11:12.824608   31046 status.go:371] multinode-496255-m02 host status = "Running" (err=<nil>)
	I1216 03:11:12.824623   31046 host.go:66] Checking if "multinode-496255-m02" exists ...
	I1216 03:11:12.826746   31046 main.go:143] libmachine: domain multinode-496255-m02 has defined MAC address 52:54:00:1b:85:6b in network mk-multinode-496255
	I1216 03:11:12.827071   31046 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1b:85:6b", ip: ""} in network mk-multinode-496255: {Iface:virbr1 ExpiryTime:2025-12-16 04:09:49 +0000 UTC Type:0 Mac:52:54:00:1b:85:6b Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-496255-m02 Clientid:01:52:54:00:1b:85:6b}
	I1216 03:11:12.827098   31046 main.go:143] libmachine: domain multinode-496255-m02 has defined IP address 192.168.39.229 and MAC address 52:54:00:1b:85:6b in network mk-multinode-496255
	I1216 03:11:12.827231   31046 host.go:66] Checking if "multinode-496255-m02" exists ...
	I1216 03:11:12.827432   31046 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 03:11:12.829392   31046 main.go:143] libmachine: domain multinode-496255-m02 has defined MAC address 52:54:00:1b:85:6b in network mk-multinode-496255
	I1216 03:11:12.829748   31046 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:1b:85:6b", ip: ""} in network mk-multinode-496255: {Iface:virbr1 ExpiryTime:2025-12-16 04:09:49 +0000 UTC Type:0 Mac:52:54:00:1b:85:6b Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-496255-m02 Clientid:01:52:54:00:1b:85:6b}
	I1216 03:11:12.829780   31046 main.go:143] libmachine: domain multinode-496255-m02 has defined IP address 192.168.39.229 and MAC address 52:54:00:1b:85:6b in network mk-multinode-496255
	I1216 03:11:12.829908   31046 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22158-5036/.minikube/machines/multinode-496255-m02/id_rsa Username:docker}
	I1216 03:11:12.912223   31046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 03:11:12.926984   31046 status.go:176] multinode-496255-m02 status: &{Name:multinode-496255-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1216 03:11:12.927008   31046 status.go:174] checking status of multinode-496255-m03 ...
	I1216 03:11:12.928601   31046 status.go:371] multinode-496255-m03 host status = "Stopped" (err=<nil>)
	I1216 03:11:12.928620   31046 status.go:384] host is not running, skipping remaining checks
	I1216 03:11:12.928627   31046 status.go:176] multinode-496255-m03 status: &{Name:multinode-496255-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.14s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (36.72s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-496255 node start m03 -v=5 --alsologtostderr: (36.228898771s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.72s)
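The two subtests above cover the per-node lifecycle: `node stop m03` leaves the rest of the cluster running (and `status` then exits 7 because one host is Stopped), while `node start m03` brings the same machine back and rejoins it. A minimal sketch with a placeholder profile name:

  # Stop only the m03 node, inspect cluster state, then start it again.
  out/minikube-linux-amd64 -p demo node stop m03
  out/minikube-linux-amd64 -p demo status          # exits 7 while a node is down
  out/minikube-linux-amd64 -p demo node start m03
  kubectl get nodes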

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (291.17s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-496255
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-496255
E1216 03:13:09.598828    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:13:52.134467    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-496255: (2m43.000076975s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-496255 --wait=true -v=5 --alsologtostderr
E1216 03:14:54.392522    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:15:49.065126    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-496255 --wait=true -v=5 --alsologtostderr: (2m8.055586421s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-496255
--- PASS: TestMultiNode/serial/RestartKeepsNodes (291.17s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.7s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-496255 node delete m03: (2.278497793s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.70s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (169.48s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 stop
E1216 03:18:09.599394    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-496255 stop: (2m49.358020311s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-496255 status: exit status 7 (59.480318ms)

                                                
                                                
-- stdout --
	multinode-496255
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-496255-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-496255 status --alsologtostderr: exit status 7 (57.447588ms)

                                                
                                                
-- stdout --
	multinode-496255
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-496255-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:19:32.994628   33783 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:19:32.994873   33783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:19:32.994882   33783 out.go:374] Setting ErrFile to fd 2...
	I1216 03:19:32.994886   33783 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:19:32.995088   33783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
	I1216 03:19:32.995244   33783 out.go:368] Setting JSON to false
	I1216 03:19:32.995266   33783 mustload.go:66] Loading cluster: multinode-496255
	I1216 03:19:32.995326   33783 notify.go:221] Checking for updates...
	I1216 03:19:32.995571   33783 config.go:182] Loaded profile config "multinode-496255": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:19:32.995582   33783 status.go:174] checking status of multinode-496255 ...
	I1216 03:19:32.997588   33783 status.go:371] multinode-496255 host status = "Stopped" (err=<nil>)
	I1216 03:19:32.997602   33783 status.go:384] host is not running, skipping remaining checks
	I1216 03:19:32.997607   33783 status.go:176] multinode-496255 status: &{Name:multinode-496255 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 03:19:32.997629   33783 status.go:174] checking status of multinode-496255-m02 ...
	I1216 03:19:32.998782   33783 status.go:371] multinode-496255-m02 host status = "Stopped" (err=<nil>)
	I1216 03:19:32.998795   33783 status.go:384] host is not running, skipping remaining checks
	I1216 03:19:32.998799   33783 status.go:176] multinode-496255-m02 status: &{Name:multinode-496255-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (169.48s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (90.17s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-496255 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1216 03:19:54.391574    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:20:49.064480    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-496255 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m29.745598099s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-496255 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (90.17s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.05s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-496255
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-496255-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-496255-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (72.972153ms)

                                                
                                                
-- stdout --
	* [multinode-496255-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-496255-m02' is duplicated with machine name 'multinode-496255-m02' in profile 'multinode-496255'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-496255-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-496255-m03 --driver=kvm2  --container-runtime=crio: (35.927509561s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-496255
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-496255: exit status 80 (197.358781ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-496255 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-496255-m03 already exists in multinode-496255-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-496255-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.05s)

                                                
                                    
TestScheduledStopUnix (106.51s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-614086 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-614086 --memory=3072 --driver=kvm2  --container-runtime=crio: (34.956577742s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-614086 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1216 03:24:41.448079   36042 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:24:41.448308   36042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:24:41.448321   36042 out.go:374] Setting ErrFile to fd 2...
	I1216 03:24:41.448325   36042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:24:41.448521   36042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
	I1216 03:24:41.448833   36042 out.go:368] Setting JSON to false
	I1216 03:24:41.448953   36042 mustload.go:66] Loading cluster: scheduled-stop-614086
	I1216 03:24:41.449369   36042 config.go:182] Loaded profile config "scheduled-stop-614086": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:24:41.449432   36042 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/scheduled-stop-614086/config.json ...
	I1216 03:24:41.449611   36042 mustload.go:66] Loading cluster: scheduled-stop-614086
	I1216 03:24:41.449701   36042 config.go:182] Loaded profile config "scheduled-stop-614086": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-614086 -n scheduled-stop-614086
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-614086 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1216 03:24:41.716842   36104 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:24:41.716952   36104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:24:41.716962   36104 out.go:374] Setting ErrFile to fd 2...
	I1216 03:24:41.716968   36104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:24:41.717153   36104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
	I1216 03:24:41.717379   36104 out.go:368] Setting JSON to false
	I1216 03:24:41.717602   36104 daemonize_unix.go:73] killing process 36078 as it is an old scheduled stop
	I1216 03:24:41.717703   36104 mustload.go:66] Loading cluster: scheduled-stop-614086
	I1216 03:24:41.718035   36104 config.go:182] Loaded profile config "scheduled-stop-614086": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:24:41.718100   36104 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/scheduled-stop-614086/config.json ...
	I1216 03:24:41.718261   36104 mustload.go:66] Loading cluster: scheduled-stop-614086
	I1216 03:24:41.718351   36104 config.go:182] Loaded profile config "scheduled-stop-614086": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1216 03:24:41.721679    8974 retry.go:31] will retry after 111.01µs: open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/scheduled-stop-614086/pid: no such file or directory
I1216 03:24:41.722839    8974 retry.go:31] will retry after 150.334µs: open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/scheduled-stop-614086/pid: no such file or directory
I1216 03:24:41.723987    8974 retry.go:31] will retry after 311.751µs: open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/scheduled-stop-614086/pid: no such file or directory
I1216 03:24:41.725105    8974 retry.go:31] will retry after 188.074µs: open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/scheduled-stop-614086/pid: no such file or directory
I1216 03:24:41.726243    8974 retry.go:31] will retry after 451.405µs: open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/scheduled-stop-614086/pid: no such file or directory
I1216 03:24:41.727358    8974 retry.go:31] will retry after 860.333µs: open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/scheduled-stop-614086/pid: no such file or directory
I1216 03:24:41.728475    8974 retry.go:31] will retry after 1.097989ms: open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/scheduled-stop-614086/pid: no such file or directory
I1216 03:24:41.730646    8974 retry.go:31] will retry after 2.536867ms: open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/scheduled-stop-614086/pid: no such file or directory
I1216 03:24:41.733835    8974 retry.go:31] will retry after 2.233376ms: open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/scheduled-stop-614086/pid: no such file or directory
I1216 03:24:41.737020    8974 retry.go:31] will retry after 1.995241ms: open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/scheduled-stop-614086/pid: no such file or directory
I1216 03:24:41.739226    8974 retry.go:31] will retry after 7.868137ms: open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/scheduled-stop-614086/pid: no such file or directory
I1216 03:24:41.747418    8974 retry.go:31] will retry after 11.756466ms: open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/scheduled-stop-614086/pid: no such file or directory
I1216 03:24:41.759655    8974 retry.go:31] will retry after 12.587953ms: open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/scheduled-stop-614086/pid: no such file or directory
I1216 03:24:41.772871    8974 retry.go:31] will retry after 21.119848ms: open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/scheduled-stop-614086/pid: no such file or directory
I1216 03:24:41.795062    8974 retry.go:31] will retry after 22.049092ms: open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/scheduled-stop-614086/pid: no such file or directory
I1216 03:24:41.817254    8974 retry.go:31] will retry after 60.08639ms: open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/scheduled-stop-614086/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-614086 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1216 03:24:54.395361    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-614086 -n scheduled-stop-614086
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-614086
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-614086 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1216 03:25:07.419864   36252 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:25:07.419977   36252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:25:07.419988   36252 out.go:374] Setting ErrFile to fd 2...
	I1216 03:25:07.419995   36252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:25:07.420196   36252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
	I1216 03:25:07.420426   36252 out.go:368] Setting JSON to false
	I1216 03:25:07.420521   36252 mustload.go:66] Loading cluster: scheduled-stop-614086
	I1216 03:25:07.420865   36252 config.go:182] Loaded profile config "scheduled-stop-614086": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:25:07.420971   36252 profile.go:143] Saving config to /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/scheduled-stop-614086/config.json ...
	I1216 03:25:07.421182   36252 mustload.go:66] Loading cluster: scheduled-stop-614086
	I1216 03:25:07.421305   36252 config.go:182] Loaded profile config "scheduled-stop-614086": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1216 03:25:49.068062    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-614086
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-614086: exit status 7 (60.719939ms)

                                                
                                                
-- stdout --
	scheduled-stop-614086
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-614086 -n scheduled-stop-614086
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-614086 -n scheduled-stop-614086: exit status 7 (57.970967ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-614086" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-614086
--- PASS: TestScheduledStopUnix (106.51s)
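A minimal sketch of the scheduled-stop flow this test exercises, reusing the profile name and flags from the run above (the test drives the same options through out/minikube-linux-amd64):

    minikube stop -p scheduled-stop-614086 --schedule 15s
    minikube stop -p scheduled-stop-614086 --cancel-scheduled
    minikube status -p scheduled-stop-614086

After the scheduled stop fires, status is expected to exit with code 7 and report the host as Stopped, as in the output above.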

                                                
                                    
TestRunningBinaryUpgrade (368.32s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.88548450 start -p running-upgrade-418673 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.88548450 start -p running-upgrade-418673 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m28.321070152s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-418673 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-418673 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (4m35.828025417s)
helpers_test.go:176: Cleaning up "running-upgrade-418673" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-418673
--- PASS: TestRunningBinaryUpgrade (368.32s)

                                                
                                    
TestKubernetesUpgrade (232.71s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-352947 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-352947 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (38.788679723s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-352947
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-352947: (1.950887477s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-352947 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-352947 status --format={{.Host}}: exit status 7 (67.209517ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-352947 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-352947 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.374923901s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-352947 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-352947 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-352947 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (90.423403ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-352947] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-beta.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-352947
	    minikube start -p kubernetes-upgrade-352947 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3529472 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-352947 --kubernetes-version=v1.35.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-352947 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-352947 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m5.46533614s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-352947" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-352947
--- PASS: TestKubernetesUpgrade (232.71s)
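The K8S_DOWNGRADE_UNSUPPORTED exit above is the guard this test expects; a minimal sketch of its first suggested recovery path (delete the profile and recreate it at the older version), using the profile name and versions from this run:

    minikube delete -p kubernetes-upgrade-352947
    minikube start -p kubernetes-upgrade-352947 --kubernetes-version=v1.28.0

The test instead takes the third option and restarts the existing cluster at v1.35.0-beta.0, which is the final start shown above.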

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-347940 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-347940 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (84.215265ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-347940] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
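The MK_USAGE exit above is the expected rejection of --kubernetes-version combined with --no-kubernetes. A minimal sketch of the two accepted alternatives, assuming the same profile: drop the version flag, or first clear a globally configured version with the command the error message suggests:

    minikube start -p NoKubernetes-347940 --no-kubernetes --driver=kvm2 --container-runtime=crio
    minikube config unset kubernetes-version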

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (76.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-347940 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-347940 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m16.116978629s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-347940 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (76.38s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (25.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-347940 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-347940 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (24.760225943s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-347940 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-347940 status -o json: exit status 2 (188.902796ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-347940","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-347940
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.78s)

                                                
                                    
TestNoKubernetes/serial/Start (22.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-347940 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-347940 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (22.473284561s)
--- PASS: TestNoKubernetes/serial/Start (22.47s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (3.37s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.37s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (94.36s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3106553842 start -p stopped-upgrade-113517 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3106553842 start -p stopped-upgrade-113517 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (57.859878306s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3106553842 -p stopped-upgrade-113517 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3106553842 -p stopped-upgrade-113517 stop: (1.764021403s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-113517 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-113517 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (34.733943379s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (94.36s)
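A minimal sketch of the stopped-binary upgrade flow driven above: start with the released v1.35.0 binary, stop the cluster, then start the same profile with the freshly built binary. The temporary binary path and profile name are taken from this run; the log-verbosity flags are omitted:

    /tmp/minikube-v1.35.0.3106553842 start -p stopped-upgrade-113517 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-v1.35.0.3106553842 -p stopped-upgrade-113517 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-113517 --memory=3072 --driver=kvm2 --container-runtime=crio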

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22158-5036/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-347940 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-347940 "sudo systemctl is-active --quiet service kubelet": exit status 1 (149.174768ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.15s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (29.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
E1216 03:28:09.598502    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (14.790651962s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (15.096417672s)
--- PASS: TestNoKubernetes/serial/ProfileList (29.89s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-347940
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-347940: (1.428200311s)
--- PASS: TestNoKubernetes/serial/Stop (1.43s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (17.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-347940 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-347940 --driver=kvm2  --container-runtime=crio: (17.504265597s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (17.50s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-347940 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-347940 "sudo systemctl is-active --quiet service kubelet": exit status 1 (159.032695ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.16s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-113517
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

                                                
                                    
TestNetworkPlugins/group/false (3.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-079027 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-079027 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (126.059949ms)

                                                
                                                
-- stdout --
	* [false-079027] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22158
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 03:29:18.766369   40095 out.go:360] Setting OutFile to fd 1 ...
	I1216 03:29:18.766670   40095 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:18.766681   40095 out.go:374] Setting ErrFile to fd 2...
	I1216 03:29:18.766687   40095 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1216 03:29:18.766966   40095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22158-5036/.minikube/bin
	I1216 03:29:18.767537   40095 out.go:368] Setting JSON to false
	I1216 03:29:18.768620   40095 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4304,"bootTime":1765851455,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1216 03:29:18.768692   40095 start.go:143] virtualization: kvm guest
	I1216 03:29:18.770763   40095 out.go:179] * [false-079027] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1216 03:29:18.772002   40095 out.go:179]   - MINIKUBE_LOCATION=22158
	I1216 03:29:18.772012   40095 notify.go:221] Checking for updates...
	I1216 03:29:18.774091   40095 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 03:29:18.775504   40095 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22158-5036/kubeconfig
	I1216 03:29:18.776715   40095 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22158-5036/.minikube
	I1216 03:29:18.777824   40095 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1216 03:29:18.778964   40095 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 03:29:18.780601   40095 config.go:182] Loaded profile config "force-systemd-env-050892": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1216 03:29:18.780719   40095 config.go:182] Loaded profile config "kubernetes-upgrade-352947": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1216 03:29:18.780826   40095 config.go:182] Loaded profile config "running-upgrade-418673": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1216 03:29:18.780949   40095 driver.go:422] Setting default libvirt URI to qemu:///system
	I1216 03:29:18.818408   40095 out.go:179] * Using the kvm2 driver based on user configuration
	I1216 03:29:18.819585   40095 start.go:309] selected driver: kvm2
	I1216 03:29:18.819603   40095 start.go:927] validating driver "kvm2" against <nil>
	I1216 03:29:18.819615   40095 start.go:938] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 03:29:18.821803   40095 out.go:203] 
	W1216 03:29:18.822899   40095 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1216 03:29:18.824002   40095 out.go:203] 

                                                
                                                
** /stderr **
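The MK_USAGE exit above is the expected guard: --cni=false is rejected because the crio runtime needs a CNI. A minimal sketch of a start line that satisfies the check, borrowing the kindnet CNI used by the network-plugin runs later in this report; the profile name example-cni is hypothetical:

    minikube start -p example-cni --memory=3072 --cni=kindnet --driver=kvm2 --container-runtime=crio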
net_test.go:88: 
----------------------- debugLogs start: false-079027 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-079027

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-079027

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-079027

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-079027

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-079027

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-079027

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-079027

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-079027

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-079027

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-079027

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-079027

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-079027" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-079027" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 03:27:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.166:8443
  name: kubernetes-upgrade-352947
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 03:27:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.109:8443
  name: running-upgrade-418673
contexts:
- context:
    cluster: kubernetes-upgrade-352947
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 03:27:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-352947
  name: kubernetes-upgrade-352947
- context:
    cluster: running-upgrade-418673
    user: running-upgrade-418673
  name: running-upgrade-418673
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-352947
  user:
    client-certificate: /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kubernetes-upgrade-352947/client.crt
    client-key: /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kubernetes-upgrade-352947/client.key
- name: running-upgrade-418673
  user:
    client-certificate: /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/running-upgrade-418673/client.crt
    client-key: /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/running-upgrade-418673/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-079027

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-079027"

                                                
                                                
----------------------- debugLogs end: false-079027 [took: 3.376544946s] --------------------------------
helpers_test.go:176: Cleaning up "false-079027" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-079027
--- PASS: TestNetworkPlugins/group/false (3.69s)

                                                
                                    
TestISOImage/Setup (20.63s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-064510 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-064510 --no-kubernetes --driver=kvm2  --container-runtime=crio: (20.625520653s)
--- PASS: TestISOImage/Setup (20.63s)

                                                
                                    
TestISOImage/Binaries/crictl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-064510 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.17s)

                                                
                                    
TestISOImage/Binaries/curl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-064510 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.17s)

                                                
                                    
TestISOImage/Binaries/docker (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-064510 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.16s)

                                                
                                    
TestISOImage/Binaries/git (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-064510 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.16s)

                                                
                                    
TestISOImage/Binaries/iptables (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-064510 ssh "which iptables"
E1216 03:34:54.391844    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/Binaries/iptables (0.19s)

                                                
                                    
TestISOImage/Binaries/podman (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-064510 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.17s)

                                                
                                    
TestISOImage/Binaries/rsync (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-064510 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.16s)

                                                
                                    
TestISOImage/Binaries/socat (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-064510 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.17s)

                                                
                                    
TestISOImage/Binaries/wget (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-064510 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.17s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-064510 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.16s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-064510 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.17s)

                                                
                                    
TestPause/serial/Start (113.33s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-127368 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E1216 03:29:54.391880    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-127368 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m53.32701525s)
--- PASS: TestPause/serial/Start (113.33s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (80.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-079027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-079027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m20.468825083s)
--- PASS: TestNetworkPlugins/group/auto/Start (80.47s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (56.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-079027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-079027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (56.711839057s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (56.71s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (80.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-079027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-079027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m20.213045223s)
--- PASS: TestNetworkPlugins/group/calico/Start (80.21s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-079027 "pgrep -a kubelet"
I1216 03:32:47.202632    8974 config.go:182] Loaded profile config "auto-079027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-079027 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-fvdpk" [f1b70760-6cf1-45f0-8d1d-72e0b14fe531] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-fvdpk" [f1b70760-6cf1-45f0-8d1d-72e0b14fe531] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003545963s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-v7lq9" [180ddbec-12ac-4edd-9ddc-d5ec7a390ab9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005617887s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-079027 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-079027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-079027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-079027 "pgrep -a kubelet"
I1216 03:33:04.194298    8974 config.go:182] Loaded profile config "kindnet-079027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-079027 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-v2tlp" [4273e840-57ab-49bf-b0f5-767a95d25462] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1216 03:33:09.599002    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-v2tlp" [4273e840-57ab-49bf-b0f5-767a95d25462] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004682301s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (70.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-079027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-079027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m10.63935296s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.64s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-079027 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-079027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-079027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (91.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-079027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-079027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m31.948783024s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (91.95s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (74.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-079027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-079027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m14.32875629s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.33s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-57zx5" [9ff5eaca-f041-4506-890a-5ce26c5540e6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006712254s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-079027 "pgrep -a kubelet"
I1216 03:34:06.863676    8974 config.go:182] Loaded profile config "calico-079027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-079027 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-kc6rk" [3ed88726-49b2-4610-9cb4-317a44b98cb4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-kc6rk" [3ed88726-49b2-4610-9cb4-317a44b98cb4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.003893846s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.32s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-079027 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-079027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-079027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-079027 "pgrep -a kubelet"
I1216 03:34:24.609004    8974 config.go:182] Loaded profile config "custom-flannel-079027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-079027 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-2jntc" [89e3b5d5-7ae3-4ea8-a98e-957a89f34859] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-2jntc" [89e3b5d5-7ae3-4ea8-a98e-957a89f34859] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004595675s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-079027 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-079027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-079027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (56.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-079027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-079027 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (56.371545673s)
--- PASS: TestNetworkPlugins/group/bridge/Start (56.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (93.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-738304 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-738304 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m33.211860859s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (93.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-079027 "pgrep -a kubelet"
I1216 03:35:03.432114    8974 config.go:182] Loaded profile config "enable-default-cni-079027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-079027 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context enable-default-cni-079027 replace --force -f testdata/netcat-deployment.yaml: (1.407104804s)
I1216 03:35:04.881750    8974 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-cxx2l" [d53953b2-4173-4c64-8961-55c8ef0c834d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-cxx2l" [d53953b2-4173-4c64-8961-55c8ef0c834d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.00461789s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.48s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-wxg2x" [307df3a5-be4e-464a-be5f-a842927d06b5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00717928s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-079027 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-079027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-079027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-079027 "pgrep -a kubelet"
I1216 03:35:20.369279    8974 config.go:182] Loaded profile config "flannel-079027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (13.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-079027 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-d7xbz" [7df76c3b-14c2-4d28-92a9-b5dd3bf9ee6b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-d7xbz" [7df76c3b-14c2-4d28-92a9-b5dd3bf9ee6b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.005189387s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (89.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-121600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-121600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (1m29.66873206s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (89.67s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-079027 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-079027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-079027 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-079027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
I1216 03:35:34.094883    8974 config.go:182] Loaded profile config "bridge-079027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-079027 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-bk6s9" [49f813ea-3380-4238-aa2a-875ec02a8539] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-bk6s9" [49f813ea-3380-4238-aa2a-875ec02a8539] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.00504113s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-079027 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-079027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-079027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-136230 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-136230 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m24.030647658s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (54.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-002629 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-002629 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (54.022889003s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (54.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-738304 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [a35be06b-c977-4f0f-b45b-d2f61a06d802] Pending
helpers_test.go:353: "busybox" [a35be06b-c977-4f0f-b45b-d2f61a06d802] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [a35be06b-c977-4f0f-b45b-d2f61a06d802] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.004168269s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-738304 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-738304 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-738304 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.806393649s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-738304 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (86.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-738304 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-738304 --alsologtostderr -v=3: (1m26.388409824s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (86.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-002629 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-002629 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.041753819s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (86.81s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-002629 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-002629 --alsologtostderr -v=3: (1m26.807715044s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (86.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-121600 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [3cfd3f50-c20d-4ddf-b46e-59cc87f66453] Pending
helpers_test.go:353: "busybox" [3cfd3f50-c20d-4ddf-b46e-59cc87f66453] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [3cfd3f50-c20d-4ddf-b46e-59cc87f66453] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004594659s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-121600 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-136230 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [49863e64-7178-4d8a-8549-095d0529e118] Pending
helpers_test.go:353: "busybox" [49863e64-7178-4d8a-8549-095d0529e118] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [49863e64-7178-4d8a-8549-095d0529e118] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00331695s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-136230 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-121600 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-121600 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (86.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-121600 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-121600 --alsologtostderr -v=3: (1m26.013933773s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (86.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-136230 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-136230 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.87s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (87.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-136230 --alsologtostderr -v=3
E1216 03:37:47.402672    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/auto-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:37:47.409002    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/auto-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:37:47.420379    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/auto-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:37:47.441754    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/auto-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:37:47.483128    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/auto-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:37:47.564568    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/auto-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:37:47.726127    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/auto-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:37:48.047900    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/auto-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:37:48.690126    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/auto-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:37:49.972325    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/auto-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:37:52.533659    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/auto-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:37:57.655583    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/auto-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:37:57.984152    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:37:57.990618    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:37:58.001952    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:37:58.023258    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:37:58.064635    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:37:58.146140    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:37:58.307639    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:37:58.628970    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:37:59.271465    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:38:00.552836    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:38:03.115094    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-136230 --alsologtostderr -v=3: (1m27.493560815s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (87.49s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-738304 -n old-k8s-version-738304
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-738304 -n old-k8s-version-738304: exit status 7 (56.167787ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-738304 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (44.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-738304 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
E1216 03:38:07.896849    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/auto-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:38:08.236812    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:38:09.598727    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:38:18.478919    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-738304 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (44.079947962s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-738304 -n old-k8s-version-738304
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.43s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-002629 -n newest-cni-002629
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-002629 -n newest-cni-002629: exit status 7 (58.580719ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-002629 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (32.82s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-002629 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1216 03:38:28.378607    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/auto-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:38:38.960669    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-002629 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (32.462966881s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-002629 -n newest-cni-002629
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.82s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-121600 -n no-preload-121600
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-121600 -n no-preload-121600: exit status 7 (72.039449ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-121600 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (58.61s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-121600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-121600 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: (58.390603848s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-121600 -n no-preload-121600
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (58.61s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-136230 -n default-k8s-diff-port-136230
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-136230 -n default-k8s-diff-port-136230: exit status 7 (84.065137ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-136230 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (64.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-136230 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-136230 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m3.964667329s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-136230 -n default-k8s-diff-port-136230
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (64.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-vr59x" [7d8ad462-d2dd-450c-8d32-ebffd47e0cf7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-vr59x" [7d8ad462-d2dd-450c-8d32-ebffd47e0cf7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.003880388s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (12.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-002629 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (4.47s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-002629 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-002629 --alsologtostderr -v=1: (1.976387374s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-002629 -n newest-cni-002629
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-002629 -n newest-cni-002629: exit status 2 (333.125919ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-002629 -n newest-cni-002629
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-002629 -n newest-cni-002629: exit status 2 (323.871656ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-002629 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-002629 --alsologtostderr -v=1: (1.055545724s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-002629 -n newest-cni-002629
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-002629 -n newest-cni-002629
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.47s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (89.68s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-301285 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1216 03:39:00.647946    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/calico-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:00.669966    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/calico-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:00.711410    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/calico-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:00.793563    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/calico-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:00.955299    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/calico-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:01.277134    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/calico-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:01.918730    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/calico-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:03.200480    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/calico-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-301285 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (1m29.677549141s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.68s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-vr59x" [7d8ad462-d2dd-450c-8d32-ebffd47e0cf7] Running
E1216 03:39:05.762772    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/calico-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005384674s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-738304 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-738304 image list --format=json
E1216 03:39:09.340702    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/auto-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-738304 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-738304 -n old-k8s-version-738304
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-738304 -n old-k8s-version-738304: exit status 2 (249.007179ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-738304 -n old-k8s-version-738304
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-738304 -n old-k8s-version-738304: exit status 2 (242.824074ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-738304 --alsologtostderr -v=1
E1216 03:39:10.885072    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/calico-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-738304 -n old-k8s-version-738304
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-738304 -n old-k8s-version-738304
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.78s)

                                                
                                    
TestISOImage/PersistentMounts//data (0.18s)

=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-064510 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0.17s)

=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-064510 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0.18s)

=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-064510 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)

=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-064510 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.19s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0.18s)

=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-064510 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.18s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0.17s)

=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-064510 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.17s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0.17s)

=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-064510 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.17s)

                                                
                                    
TestISOImage/VersionJSON (0.24s)

=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-064510 ssh "cat /version.json"
iso_test.go:116: Successfully parsed /version.json:
iso_test.go:118:   iso_version: v1.37.0-1765836331-22158
iso_test.go:118:   kicbase_version: v0.0.48-1765575274-22117
iso_test.go:118:   minikube_version: v1.37.0
iso_test.go:118:   commit: 223ff1da6c8a6a3d53f659294dcb5c0a9744c10e
--- PASS: TestISOImage/VersionJSON (0.24s)

                                                
                                    
TestISOImage/eBPFSupport (0.24s)

=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-064510 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.24s)
E1216 03:39:19.922114    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:21.126845    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/calico-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:24.873870    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/custom-flannel-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:24.880351    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/custom-flannel-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:24.891785    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/custom-flannel-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:24.913200    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/custom-flannel-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:24.955407    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/custom-flannel-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:25.036960    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/custom-flannel-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:25.198536    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/custom-flannel-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:25.520190    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/custom-flannel-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:26.162259    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/custom-flannel-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:27.444486    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/custom-flannel-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:30.006519    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/custom-flannel-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:32.682014    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/addons-703051/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:35.128078    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/custom-flannel-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:37.464057    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-qcgml" [2f1056fe-ed73-42cf-9a27-253560f7ce58] Running
E1216 03:39:41.608238    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/calico-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:45.369429    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/custom-flannel-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004480192s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-qcgml" [2f1056fe-ed73-42cf-9a27-253560f7ce58] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003774953s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-121600 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-121600 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.13s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-121600 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-121600 -n no-preload-121600
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-121600 -n no-preload-121600: exit status 2 (211.056744ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-121600 -n no-preload-121600
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-121600 -n no-preload-121600: exit status 2 (234.118363ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-121600 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-121600 --alsologtostderr -v=1: (1.125383395s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-121600 -n no-preload-121600
E1216 03:39:54.392151    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-668205/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-121600 -n no-preload-121600
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-zpksb" [63a98247-a0dc-4409-9d6a-0b720475d4bc] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003324425s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-zpksb" [63a98247-a0dc-4409-9d6a-0b720475d4bc] Running
E1216 03:40:04.841586    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/enable-default-cni-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:04.847943    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/enable-default-cni-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:04.859268    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/enable-default-cni-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:04.880744    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/enable-default-cni-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:04.922087    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/enable-default-cni-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:05.003504    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/enable-default-cni-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:05.165006    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/enable-default-cni-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:05.486667    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/enable-default-cni-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:05.851332    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/custom-flannel-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:06.128728    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/enable-default-cni-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004188122s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-136230 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-136230 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-136230 --alsologtostderr -v=1
E1216 03:40:07.410213    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/enable-default-cni-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-136230 -n default-k8s-diff-port-136230
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-136230 -n default-k8s-diff-port-136230: exit status 2 (203.342371ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-136230 -n default-k8s-diff-port-136230
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-136230 -n default-k8s-diff-port-136230: exit status 2 (198.427748ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-136230 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-136230 -n default-k8s-diff-port-136230
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-136230 -n default-k8s-diff-port-136230
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.25s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-301285 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [c3fff346-a540-4044-b0d4-34e79dac7a6a] Pending
E1216 03:40:31.262632    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/auto-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [c3fff346-a540-4044-b0d4-34e79dac7a6a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1216 03:40:34.369807    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/bridge-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:34.376156    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/bridge-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:34.387491    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/bridge-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:34.409078    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/bridge-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:34.450467    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/bridge-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [c3fff346-a540-4044-b0d4-34e79dac7a6a] Running
E1216 03:40:34.532496    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/bridge-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:34.651979    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/flannel-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:34.694362    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/bridge-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:35.016283    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/bridge-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:35.657734    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/bridge-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:36.939528    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/bridge-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003748701s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-301285 exec busybox -- /bin/sh -c "ulimit -n"
E1216 03:40:39.501368    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/bridge-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-301285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-301285 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.85s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (82.73s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-301285 --alsologtostderr -v=3
E1216 03:40:41.843489    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:44.623330    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/bridge-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:45.817588    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/enable-default-cni-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:46.812878    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/custom-flannel-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:49.065300    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/functional-660584/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:54.864875    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/bridge-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:40:55.133724    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/flannel-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:41:15.346242    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/bridge-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:41:26.780078    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/enable-default-cni-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:41:28.127126    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/old-k8s-version-738304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:41:28.133529    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/old-k8s-version-738304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:41:28.144908    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/old-k8s-version-738304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:41:28.166330    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/old-k8s-version-738304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:41:28.207718    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/old-k8s-version-738304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:41:28.289172    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/old-k8s-version-738304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:41:28.450713    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/old-k8s-version-738304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:41:28.772409    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/old-k8s-version-738304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:41:29.414631    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/old-k8s-version-738304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:41:30.696828    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/old-k8s-version-738304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:41:33.259129    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/old-k8s-version-738304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:41:36.095392    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/flannel-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:41:38.380563    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/old-k8s-version-738304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:41:44.491429    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/calico-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:41:48.622671    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/old-k8s-version-738304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:41:56.308328    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/bridge-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:02.514228    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/no-preload-121600/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:02.520656    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/no-preload-121600/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:02.532062    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/no-preload-121600/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:02.553445    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/no-preload-121600/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:02.594845    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/no-preload-121600/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:02.676254    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/no-preload-121600/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:02.838133    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/no-preload-121600/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-301285 --alsologtostderr -v=3: (1m22.725337756s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (82.73s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-301285 -n embed-certs-301285
E1216 03:42:03.160251    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/no-preload-121600/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-301285 -n embed-certs-301285: exit status 7 (57.476087ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-301285 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (44.18s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-301285 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2
E1216 03:42:03.802228    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/no-preload-121600/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:05.083845    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/no-preload-121600/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:07.645957    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/no-preload-121600/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:08.734604    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/custom-flannel-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:09.104639    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/old-k8s-version-738304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:12.767373    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/no-preload-121600/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:13.746737    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/default-k8s-diff-port-136230/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:13.753193    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/default-k8s-diff-port-136230/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:13.764555    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/default-k8s-diff-port-136230/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:13.785958    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/default-k8s-diff-port-136230/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:13.827462    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/default-k8s-diff-port-136230/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:13.908909    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/default-k8s-diff-port-136230/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:14.070494    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/default-k8s-diff-port-136230/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:14.391748    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/default-k8s-diff-port-136230/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:15.033243    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/default-k8s-diff-port-136230/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:16.314867    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/default-k8s-diff-port-136230/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:18.876636    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/default-k8s-diff-port-136230/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:23.009322    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/no-preload-121600/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:23.998761    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/default-k8s-diff-port-136230/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:34.240599    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/default-k8s-diff-port-136230/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:43.491050    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/no-preload-121600/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-301285 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.2: (43.903635766s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-301285 -n embed-certs-301285
E1216 03:42:47.402079    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/auto-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.18s)
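SecondStart re-runs `minikube start` against the existing profile with the same flags and then confirms the host reports Running. The equivalent manual sequence, using the exact invocation from the log:

	# Restart the stopped profile; the second start reuses the existing VM and configuration
	out/minikube-linux-amd64 start -p embed-certs-301285 --memory=3072 --alsologtostderr --wait=true \
	  --embed-certs --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.34.2

	# Expect "Running" and exit status 0 once the restart completes
	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-301285 -n embed-certs-301285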

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-mtsxh" [f843f5d6-e0f3-455f-9a71-bd1b6dacb87c] Running
E1216 03:42:48.701460    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/enable-default-cni-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:50.066778    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/old-k8s-version-738304/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003529774s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-mtsxh" [f843f5d6-e0f3-455f-9a71-bd1b6dacb87c] Running
E1216 03:42:54.722025    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/default-k8s-diff-port-136230/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:57.984394    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kindnet-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:42:58.016829    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/flannel-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004033838s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-301285 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)
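Both dashboard checks above poll the kubernetes-dashboard namespace for pods carrying the k8s-app=kubernetes-dashboard label. A rough equivalent with plain kubectl, assuming the embed-certs-301285 context from this run (the 120s timeout is illustrative):

	# Wait for the dashboard pod to become Ready, mirroring the label selector the test polls
	kubectl --context embed-certs-301285 -n kubernetes-dashboard wait --for=condition=ready pod \
	  -l k8s-app=kubernetes-dashboard --timeout=120s

	# Same inspection the test performs for the scraper deployment
	kubectl --context embed-certs-301285 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper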

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-301285 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.19s)
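The image audit lists everything cached in the profile and reports anything outside the images minikube itself ships, such as the kindnetd and busybox entries above. To reproduce the listing:

	# JSON form used by the test; the bare listing prints one image per line and is easier to scan by eye
	out/minikube-linux-amd64 -p embed-certs-301285 image list --format=json
	out/minikube-linux-amd64 -p embed-certs-301285 image list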

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-301285 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-301285 -n embed-certs-301285
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-301285 -n embed-certs-301285: exit status 2 (201.957662ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-301285 -n embed-certs-301285
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-301285 -n embed-certs-301285: exit status 2 (203.088612ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-301285 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-301285 -n embed-certs-301285
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-301285 -n embed-certs-301285
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.32s)
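The pause check is a pause, status, unpause, status round trip: while paused, `status` deliberately exits 2 with the API server shown as Paused and the kubelet as Stopped, and both queries succeed again after unpause. The same sequence by hand:

	out/minikube-linux-amd64 pause -p embed-certs-301285 --alsologtostderr -v=1

	# While paused, each status query exits 2 (expected): APIServer prints "Paused", Kubelet prints "Stopped"
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-301285 -n embed-certs-301285
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-301285 -n embed-certs-301285

	out/minikube-linux-amd64 unpause -p embed-certs-301285 --alsologtostderr -v=1

	# After unpausing, both queries should succeed again with exit status 0
	out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-301285 -n embed-certs-301285
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-301285 -n embed-certs-301285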

                                                
                                    

Test skip (52/431)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0.29
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
151 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
152 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
153 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
154 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
155 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
156 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
157 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
158 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0.01
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService 0.01
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0.01
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
371 TestNetworkPlugins/group/kubenet 3.22
379 TestNetworkPlugins/group/cilium 3.97
400 TestStartStop/group/disable-driver-mounts 0.23
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.29s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-703051 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.29s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-079027 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-079027

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-079027

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-079027

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-079027

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-079027

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-079027

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-079027

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-079027

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-079027

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-079027

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-079027

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-079027" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-079027" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 03:27:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.166:8443
  name: kubernetes-upgrade-352947
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 03:27:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.109:8443
  name: running-upgrade-418673
contexts:
- context:
    cluster: kubernetes-upgrade-352947
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 03:27:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-352947
  name: kubernetes-upgrade-352947
- context:
    cluster: running-upgrade-418673
    user: running-upgrade-418673
  name: running-upgrade-418673
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-352947
  user:
    client-certificate: /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kubernetes-upgrade-352947/client.crt
    client-key: /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kubernetes-upgrade-352947/client.key
- name: running-upgrade-418673
  user:
    client-certificate: /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/running-upgrade-418673/client.crt
    client-key: /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/running-upgrade-418673/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-079027

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-079027"

                                                
                                                
----------------------- debugLogs end: kubenet-079027 [took: 3.054313791s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-079027" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-079027
--- SKIP: TestNetworkPlugins/group/kubenet (3.22s)
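Every probe above reports "Profile not found" because the skipped test never creates the kubenet-079027 profile. A minimal sketch of the lookup-and-start sequence the log output itself suggests (illustrative only; these commands were not run by this job):

    minikube profile list                 # would show that kubenet-079027 is absent
    minikube start -p kubenet-079027      # would create the profile the probes expect
    minikube delete -p kubenet-079027     # cleanup, mirroring the helpers_test.go:179 teardown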

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-079027 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-079027

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-079027

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-079027

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-079027

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-079027

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-079027

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-079027

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-079027

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-079027

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-079027

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-079027

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-079027" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-079027

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-079027

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-079027

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-079027

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-079027" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-079027" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 03:27:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.166:8443
  name: kubernetes-upgrade-352947
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22158-5036/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 03:27:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.109:8443
  name: running-upgrade-418673
contexts:
- context:
    cluster: kubernetes-upgrade-352947
    extensions:
    - extension:
        last-update: Tue, 16 Dec 2025 03:27:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-352947
  name: kubernetes-upgrade-352947
- context:
    cluster: running-upgrade-418673
    user: running-upgrade-418673
  name: running-upgrade-418673
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-352947
  user:
    client-certificate: /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kubernetes-upgrade-352947/client.crt
    client-key: /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/kubernetes-upgrade-352947/client.key
- name: running-upgrade-418673
  user:
    client-certificate: /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/running-upgrade-418673/client.crt
    client-key: /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/running-upgrade-418673/client.key
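The dump above has current-context set to "" because the cilium-079027 context was never created; only the kubernetes-upgrade-352947 and running-upgrade-418673 entries exist. As an illustrative sketch (not something this test performs), selecting one of the existing contexts would look like:

    kubectl config get-contexts                             # lists the two contexts from the dump above
    kubectl config use-context kubernetes-upgrade-352947    # sets current-context
    kubectl config current-context                          # now prints kubernetes-upgrade-352947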

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-079027

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-079027" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-079027"

                                                
                                                
----------------------- debugLogs end: cilium-079027 [took: 3.817815243s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-079027" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-079027
--- SKIP: TestNetworkPlugins/group/cilium (3.97s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-530965" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-530965
E1216 03:39:00.629811    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/calico-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1216 03:39:00.636338    8974 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22158-5036/.minikube/profiles/calico-079027/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
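The skip at start_stop_delete_test.go:101 is driver-gated: this job uses the KVM/crio driver, while the scenario only runs on VirtualBox. A rough sketch of the gated start, assuming a host with the VirtualBox driver installed (illustrative only; --driver and --disable-driver-mounts are standard minikube start options, and nothing below ran in this job):

    minikube start -p disable-driver-mounts-530965 --driver=virtualbox --disable-driver-mounts
    minikube delete -p disable-driver-mounts-530965   # cleanup, as in helpers_test.go:179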

                                                
                                    